Search Results

Search found 34688 results on 1388 pages for 'default document'.


  • Integrate Nitro PDF Reader with Windows 7

    - by Matthew Guay
    Would you like a lightweight PDF reader that integrates nicely with Office and Windows 7?  Here we look at the new Nitro PDF Reader, a nice PDF viewer that also lets you create and markup PDF files. Adobe Reader is the de-facto PDF viewer, but it only lets you view PDFs and not much else.  Additionally, it doesn’t fully integrate with 64-bit editions of Vista and Windows 7.  There are many alternate PDF readers, but Nitro PDF Reader is a new entry into this field that offers more features than most PDF readers.  From the creators of the popular free PrimoPDF printer, the new Reader lets you create PDFs from a variety of file formats and markup existing PDFs with notes, highlights, stamps, and more in addition to viewing PDFs.  It also integrates great with Windows 7 using the Office 2010 ribbon interface. Getting Started Download the free Nitro PDF Reader (link below) and install as normal.  Nitro PDF Reader has separate versions for 32 & 64-bit editions of Windows, so download the correct one for your computer. Note:  Nitro PDF Reader is still in Beta testing, so only install if you’re comfortable with using beta software. On first run, Nitro PDF Reader will ask if you want to make it the default PDF viewer.  If you don’t want to, make sure to uncheck the box beside Always perform this check to keep it from opening this prompt every time you use it. It will also open an introductory PDF the first time you run it so you can quickly get acquainted with its features. Windows 7 Integration One of the first things you’ll notice is that Nitro PDF Reader integrates great with Windows 7.  The ribbon interface fits right in with native applications such as WordPad and Paint, as well as Office 2010. If you set Nitro PDF Reader as your default PDF viewer, you’ll see thumbnails of your PDFs in Windows Explorer. If you turn on the Preview Pane, you can read full PDFs in Windows Explorer.  Adobe Reader lets you do this in 32 bit versions, but Nitro PDF works in 64 bit versions too. The PDF preview even works in Outlook.  If you receive an email with a PDF attachment, you can select the PDF and view it directly in the Reading Pane.  Click the Preview file button, and you can uncheck the box at the bottom so PDFs will automatically open for preview if you want.   Now you can read your PDF attachments in Outlook without opening them separately.  This works in both Outlook 2007 and 2010. Edit your PDFs Adobe Reader only lets you view PDF files, and you can’t save data you enter in PDF forms.  Nitro PDF Reader, however, gives you several handy markup tools you can use to edit your PDFs.  When you’re done, you can save the final PDF, including information entered into forms. With the ribbon interface, it’s easy to find the tools you want to edit your PDFs. Here we’ve highlighted text in a PDF and added a note to it.  We can now save these changes, and they’ll look the same in any PDF reader, including Adobe Reader. You can also enter new text in PDFs.  This will open a new tab in the ribbon, where you can select basic font settings.  Select the Click To Finish button in the ribbon when you’re finished editing text.   Or, if you want to use the text or pictures from a PDF in another application, you can choose to extract them directly in Nitro PDF Reader.  Create PDFs One of the best features of Nitro PDF Reader is the ability to create PDFs from almost any file.  Nitro adds a new virtual printer to your computer that creates PDF files from anything you can print.  
Print your file as normal, but select the Nitro PDF Creator (Reader) printer. Enter a name for your PDF, select if you want to edit the PDF properties, and click Create. If you choose to edit the PDF properties, you can add your name and information to the file, select the initial view, encrypt it, and restrict permissions. Alternately, you can create a PDF from almost any file by simply drag-and-dropping it into Nitro PDF Reader. It will automatically convert the file to PDF and open it in a new tab in Nitro PDF. Now from the File menu you can send the PDF as an email attachment so anyone can view it. Make sure to save the PDF before closing Nitro, as it does not automatically save the PDF file. Conclusion Nitro PDF Reader is a nice alternative to Adobe Reader, and offers some features that are only available in the more expensive Adobe Acrobat. With great Windows 7 integration, including full support for 64-bit editions, Nitro fits in with the Windows and Office experience very nicely. If you have tried out Nitro PDF Reader leave a comment and let us know what you think. Link Download Nitro PDF Reader

    Read the article

  • Using Unity – Part 3

    - by nmarun
    The previous blog was about registering and invoking different types dynamically. In this one I’d like to show how Unity manages/disposes the instances – say hello to Lifetime Managers. When a type gets registered, either through the config file or when RegisterType method is explicitly called, the default behavior is that the container uses a transient lifetime manager. In other words, the unity container creates a new instance of the type when Resolve or ResolveAll method is called. Whereas, when you register an existing object using the RegisterInstance method, the container uses a container controlled lifetime manager - a singleton pattern. It does this by storing the reference of the object and that means so as long as the container is ‘alive’, your registered instance does not go out of scope and will be disposed only after the container either goes out of scope or when the code explicitly disposes the container. Let’s see how we can use these and test if something is a singleton or a transient instance. Continuing on the same solution used in the previous blogs, I have made the following changes: First is to add typeAlias elements for TransientLifetimeManager type: 1: <typeAlias alias="transient" type="Microsoft.Practices.Unity.TransientLifetimeManager, Microsoft.Practices.Unity"/> You then need to tell what type(s) you want to be transient by nature: 1: <type type="IProduct" mapTo="Product2"> 2: <lifetime type="transient" /> 3: </type> 4: <!--<type type="IProduct" mapTo="Product2" />--> The lifetime element’s type attribute matches with the alias attribute of the typeAlias element. Now since ‘transient’ is the default behavior, you can have a concise version of the same as line 4 shows. Also note that I’ve changed the mapTo attribute from ‘Product’ to ‘Product2’. I’ve done this to help understand the transient nature of the instance of the type Product2. By making this change, you are basically saying when a type of IProduct needs to be resolved, Unity should create an instance of Product2 by default. 1: public string WriteProductDetails() 2: { 3: return string.Format("Name: {0}<br/>Category: {1}<br/>Mfg Date: {2}<br/>Hash Code: {3}", 4: Name, Category, MfgDate.ToString("MM/dd/yyyy hh:mm:ss tt"), GetHashCode()); 5: } Again, the above change is purely for the purpose of making the example more clear to understand. The display will show the full date and also displays the hash code of the current instance. The GetHashCode() method returns an integer when an instance gets created – a new integer for every instance. When you run the application, you’ll see something like the below: Now when you click on the ‘Get Product2 Instance’ button, you’ll see that the Mfg Date (which is set in the constructor) and the Hash Code are different from the one created on page load. This proves to us that a new instance is created every single time. To make this a singleton, we need to add a type alias for the ContainerControlledLifetimeManager class and then change the type attribute of the lifetime element to singleton. 1: <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 
3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="singleton" /> 5: </type> Running the application now gets me the following output: Click on the button below and you’ll see that the Mfg Date and the Hash code remain unchanged => the unity container is storing the reference the first time it is created and then returns the same instance every time the type needs to be resolved. Digging more deeper into this, Unity provides more than the two lifetime managers. ExternallyControlledLifetimeManager – maintains a weak reference to type mappings and instances. Unity returns the same instance as long as the some code is holding a strong reference to this instance. For this, you need: 1: <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="external" /> 5: </type> PerThreadLifetimeManager – Unity returns a unique instance of an object for each thread – so this effectively is a singleton behavior on a  per-thread basis. 1: <typeAlias alias="perThread" type="Microsoft.Practices.Unity.PerThreadLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="perThread" /> 5: </type> One thing to note about this is that if you use RegisterInstance method to register an existing object, this instance will be returned for every thread, making this a purely singleton behavior. Needless to say, this type of lifetime management is useful in multi-threaded applications (duh!!). I hope this blog provided some basics on lifetime management of objects resolved in Unity and in the next blog, I’ll talk about Injection. Please see the code used here.

    Read the article

  • How to Assign a Static IP Address in XP, Vista, or Windows 7

    - by Mysticgeek
    When organizing your home network it’s easier to assign each computer it’s own IP address than using DHCP. Here we will take a look at doing it in XP, Vista, and Windows 7. If you have a home network with several computes and devices, it’s a good idea to assign each of them a specific address. If you use DHCP (Dynamic Host Configuration Protocol), each computer will request and be assigned an address every time it’s booted up. When you have to do troubleshooting on your network, it’s annoying going to each machine to figure out what IP they have. Using Static IPs prevents address conflicts between devices and allows you to manage them more easily. Assigning IPs to Windows is essentially the same process, but getting to where you need to be varies between each version. Windows 7 To change the computer’s IP address in Windows 7, type network and sharing into the Search box in the Start Menu and select Network and Sharing Center when it comes up.   Then when the Network and Sharing Center opens, click on Change adapter settings. Right-click on your local adapter and select Properties. In the Local Area Connection Properties window highlight Internet Protocol Version 4 (TCP/IPv4) then click the Properties button. Now select the radio button Use the following IP address and enter in the correct IP, Subnet mask, and Default gateway that corresponds with your network setup. Then enter your Preferred and Alternate DNS server addresses. Here we’re on a home network and using a simple Class C network configuration and Google DNS. Check Validate settings upon exit so Windows can find any problems with the addresses you entered. When you’re finished click OK. Now close out of the Local Area Connections Properties window. Windows 7 will run network diagnostics and verify the connection is good. Here we had no problems with it, but if you did, you could run the network troubleshooting wizard. Now you can open the command prompt and do an ipconfig  to see the network adapter settings have been successfully changed.   Windows Vista Changing your IP from DHCP to a Static address in Vista is similar to Windows 7, but getting to the correct location is a bit different. Open the Start Menu, right-click on Network, and select Properties. The Network and Sharing Center opens…click on Manage network connections. Right-click on the network adapter you want to assign an IP address and click Properties. Highlight Internet Protocol Version 4 (TCP/IPv4) then click the Properties button. Now change the IP, Subnet mask, Default Gateway, and DNS Server Addresses. When you’re finished click OK. You’ll need to close out of Local Area Connection Properties for the settings to go into effect. Open the Command Prompt and do an ipconfig to verify the changes were successful.   Windows XP In this example we’re using XP SP3 Media Center Edition and changing the IP address of the Wireless adapter. To set a Static IP in XP right-click on My Network Places and select Properties. Right-click on the adapter you want to set the IP for and select Properties. Highlight Internet Protocol (TCP/IP) and click the Properties button. Now change the IP, Subnet mask, Default Gateway, and DNS Server Addresses. When you’re finished click OK. You will need to close out of the Network Connection Properties screen before the changes go into effect.   Again you can verify the settings by doing an ipconfig in the command prompt. In case you’re not sure how to do this, click on Start then Run.   In the Run box type in cmd and click OK. 
Then at the prompt type in ipconfig and hit Enter. This will show the IP address for the network adapter you changed. If you have a small office or home network, assigning each computer a specific IP address makes it a lot easier to manage and troubleshoot network connection problems.
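If you would rather script the verification step than run ipconfig by hand on every machine, the .NET Framework exposes the same adapter information. This is only an illustrative sketch using System.Net.NetworkInformation, not something from the original article.

```csharp
using System;
using System.Net.NetworkInformation;
using System.Net.Sockets;

// Minimal sketch: print the IPv4 address, subnet mask, gateway and DNS servers
// for every adapter that is currently up, similar to what ipconfig shows.
class ShowAdapterSettings
{
    static void Main()
    {
        foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
        {
            if (nic.OperationalStatus != OperationalStatus.Up)
                continue;

            Console.WriteLine(nic.Name);
            IPInterfaceProperties props = nic.GetIPProperties();

            foreach (UnicastIPAddressInformation addr in props.UnicastAddresses)
            {
                if (addr.Address.AddressFamily == AddressFamily.InterNetwork)
                    Console.WriteLine("  IPv4 address: {0}  Mask: {1}", addr.Address, addr.IPv4Mask);
            }
            foreach (GatewayIPAddressInformation gw in props.GatewayAddresses)
                Console.WriteLine("  Gateway: {0}", gw.Address);
            foreach (var dns in props.DnsAddresses)
                Console.WriteLine("  DNS: {0}", dns);
        }
    }
}
```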

    Read the article

  • How to Play FLAC Files in Windows 7 Media Center & Player

    - by Mysticgeek
    An annoyance for music lovers who enjoy FLAC format, is there’s no native support for WMP or WMC. If you’re a music enthusiast who prefers FLAC format, we’ll look at adding support to Windows 7 Media Center and Player. For the following article we are using Windows 7 Home Premium 32-bit edition. Download and Install madFLAC v1.8 The first thing we need to do is download and install the madFLAC v1.8 decoder (link below). Just unzip the file and run install.bat… You’ll get a message that it has been successfully registered, click Ok. To verify everything is working, open up one of your FLAC files with WMP, and you’ll get the following message. Check the box Don’t ask me again for this extension and click Yes. Now Media Player should play the track you’ve chosen.   Delete Current Music Library But what if you want to add your entire collection of FLAC files to the Library? If you already have it set up as your default music player, unfortunately we need to remove the current library and delete the database. The best way to manage the music library in Windows 7 is via WMP 12. Since we don’t want to delete songs from the computer we need to Open WMP, press “Alt+T” and navigate to Tools \ Options \ Library.   Now uncheck the box Delete files from computer when deleted from library and click Ok. Now in your Library click “Ctrl + A” to highlight all of the songs in the Library, then hit the “Delete” key. If you have a lot of songs in your library (like on our system) you’ll see the following dialog box while it collects all of the information.   After all of the data is collected, make sure the radio button next to Delete from library only is marked and click Ok. Again you’ll see the Working progress window while the songs are deleted. Deleting Current Database Now we need to make sure we’re starting out fresh. Close out of Media Player, then we’ll basically follow the same directions The Geek pointed out for fixing the WMP Library. Click on Start and type in services.msc into the search box and hit Enter. Now scroll down and stop the service named Windows Media Player Network Sharing Service. Now, navigate to the following directory and the main file to delete CurrentDatabase_372.wmdb %USERPROFILE%\Local Settings\Application Data\Microsoft\Media Player\ Again, the main file to delete is CurrentDatabase_372.wmdb, though if you want, you can delete them all. If you’re uneasy about deleting these files, make sure to back them up first. Now after you restart WMP you can begin adding your FLAC files. For those of us with large collections, it’s extremely annoying to see WMP try to pick up all of your media by default. To delete the other directories go to Organize \ Manage Libraries then open the directories you want to remove. For example here we’re removing the default libraries it tries to check for music. Remove the directories you don’t want it to gather contents from in each of the categories. We removed all of the other collections and only added the FLAC music directory from our home server. SoftPointer Tag Support Plugin Even though we were able to get FLAC files to play in WMP and WMC at this point, there’s another utility from SoftPointer to add. It enables FLAC (and other file formats) to be picked up in the library much easier. It has a long name but is effective –M4a/FLAC/Ogg/Ape/Mpc Tag Support Plugin for Media Player and Media Center (link below). Just install it by accepting the defaults, and you’ll be glad you did. 
After installing it, and re-launching Media Player, give it some time to collect all of the data from your FLAC directory…it can take a while. In fact, if your collection is huge, just walk away and let it do its thing. If you try to use it right away, WMP slows down considerably while updating the library. Once the library is set up you’ll be able to play your FLAC tunes in Windows 7 Media Center as well as Windows Media Player 12. Album Art One caveat is that some of our albums didn’t show any cover art. But we were usually able to get it by right-clicking the album and selecting Find album info. Then confirming the album information is correct… Conclusion Although this seems like several steps to go through to play FLAC files in Windows 7 Media Center and Player, it seems to work really well after it’s set up. We haven’t tried this with a 64-bit machine; the process should be similar, but you might want to make sure the codecs you use are 64-bit. We’re sure there are other methods out there that some of you use, and if so leave us a comment and tell us about it. Download madFLAC v1.8  M4a/FLAC/Ogg/Ape/Mpc Tag Support Plugin for Media Player and Media Center from SoftPointer

    Read the article

  • Convert Video and Remove Commercials in Windows 7 Media Center with MCEBuddy 1.1

    - by DigitalGeekery
    Today look at MCEBuddy for Windows 7 Media Center. This handy app automatically takes your recorded TV files and converts them to MP4, AVI, WMV, or MPEG format. It even has the option to cut out those annoying commercials during the conversion process. Installation and Configuration Download and extract MCE Buddy. (Download link below) Run the setup.exe file and take all the default settings.   Open MCEBuddy Configuration by going to Start > All Programs > MCEBuddy > MCEBuddy Configuration.   Video Paths The MCEBuddy application is comprised of a single window. The first step you’ll want to take is to define your Source and Destination paths. The “Source” will most likely be your Recorded TV directory. The Destination should NOT be the same as the Source folder. Note: The Recorded TV directory in Windows 7 Media Center will only display and play WTV & DVR-MS files. To watch the converted MP4, AVI, WMV, or MPEG files in Windows Media Center you’ll need to add them to your Video Library or Movie Library. Video Conversion Next, choose your preferred format for conversion from the “Convert to” drop down list. The default is MP4 with the H.264 codec. You’ll find a wide variety of formats. The first set of conversion options in the drop down list will resize the video to 720 pixels wide. The next two sections maintain the original size, and the final section is for a variety of portable devices.   Next, you’ll see a group of check boxes below the “Convert to” drop down list. The Commercial Skipping option will cut the commercials while converting the file. Sort By Series will create a sub-folder in your Destination folder for each TV show. Delete Original will delete the WTV file after conversion is complete. (This option is not recommended unless you are sure your files are converting properly and you no longer need the WTV file.) Start Minimized is ideal if you want to run MCEBuddy on Windows startup. Note: MCEBuddy installs and uses Comskip for commercial cutting by default. However, if you have ShowAnalyzer installed, it will use that application instead. Advanced Options To choose a specific time of day to perform the conversions, click the checkbox under the “Advanced Options,” and select the starting and ending times for conversion. For example, convert between 2 hours and 5 hours would be between 2 am and 5am. If you want MCEBuddy to constantly look for and immediately convert new recordings, leave the box unchecked.   The “Video age” option lets you choose a specific number of days to wait before performing the conversion. This can be useful if you want to watch the recordings first and delete those you don’t wish to convert. You can also choose the “Sub Directories” if you’d like MCEBuddy to convert files that are in a sub-folder in your “Source” directory. Second Conversion As you might expect, this option allows MCEBuddy to perform a second conversion of your file. This can be useful if you want to use your first conversion to create a higher quality MP4 or AVI file for playback on a larger screen, and a second one for a portable device such as Zune or iPhone. The same options from the first conversion are also available for the second. You’ll want to choose a separate Destination folder for the second conversion.   Start and Monitor Progress To start converting your video files, simply press the “Start” button at the bottom. You’ll be able to follow the progress in the “Current Activity” section. 
When all the video files have finished converting, or there are no current files to convert, MCEBuddy will display a “Started – Idle” status. Click “Stop” if you don’t want MCEBuddy to continue scanning for new files. Conclusion MCEBuddy 1.1 will convert all WTV files in its source folder. If you want to pick and choose which recordings to convert, you may want to define a source folder different from the Recorded TV folder and then just copy or move the files you wish to convert into the new source folder. The conversion process does take a good bit of time. If you choose the commercial skipping and second conversion options it can take several hours to fully convert one TV recording. Overall, MCEBuddy makes a nice Media Center addition for those that want to save some space with smaller size files, convert Recorded TV files for their portable device, or automatically remove commercials. If you’re looking for a different method to skip commercials, check out our post on how to skip commercials in Windows 7 Media Center. Download MCEBuddy 1.1

    Read the article

  • WebLogic Server Performance and Tuning: Part I - Tuning JVM

    - by Gokhan Gungor
Each WebLogic Server instance runs in its own dedicated Java Virtual Machine (JVM), which is its runtime environment. Every Admin Server in any domain executes within a JVM, and the same applies to Managed Servers. WebLogic Server can be used for a wide variety of applications and services, all of which share the same runtime environment and resources. Oracle WebLogic ships with two different JVMs, HotSpot and JRockit, and you can choose which JVM you want to use. The JVM is designed to optimize itself, but it also provides startup options for making small adjustments, and there are default values for its memory and garbage collection settings. In the real world you will not want to stick with the default values provided by the JVM; instead you will want to customize these values based on your applications, which can produce large gains in performance from small changes to the JVM parameters. We can tell the garbage collector how to delete garbage, and we can also tell the JVM how much space to allocate for each generation (of Java objects) or for the heap. Remember that during garbage collection no other process is executed within the JVM or runtime - this is called STOP THE WORLD and it can affect the overall throughput. Each JVM has its own memory segment called heap memory, which is the storage for Java objects. These objects can be grouped by age, such as the young generation (recently created objects) or the old generation (surviving objects that have lived for some time). A Java object is considered garbage when it can no longer be reached from anywhere in the running program. Each generation has its own memory segment within the heap. When this segment gets full, the garbage collector deletes all the objects that are marked as garbage to create space. When the old generation space gets full, the JVM performs a major collection to remove the unused objects and reclaim their space. A major garbage collection takes a significant amount of time and can affect system performance. When we create a managed server, either on the same machine or on a remote machine, it gets its initial startup parameters from the $DOMAIN_HOME/bin/setDomainEnv.sh/cmd file. By default two parameters are set: Xms (the initial heap size) and Xmx (the maximum heap size). Try to set the initial and maximum heap sizes to the same value. The startup time can be a little longer, but for long-running applications it will provide better performance. When we set -Xms512m -Xmx1024m, the physical heap size will be 512m. This means that there are pages of memory (beyond the initial 512m) that the JVM does not explicitly control; they are controlled by the OS and could be reserved for other tasks. In this case it is an advantage if the JVM claims the entire memory at once, so it does not have to spend time extending the heap when more memory is needed. You can also use the -XX:MaxPermSize option (maximum size of the permanent generation) for the Sun JVM. You should adjust this size accordingly if your application dynamically loads and unloads a lot of classes, in order to optimize performance.
You can set the JVM options/heap size from the following places:
- Through the Admin console, in the Server Start tab
- In the startManagedWebLogic script for the managed servers ($DOMAIN_HOME/bin/startManagedWebLogic.sh/cmd): JAVA_OPTIONS="-Xms1024m -Xmx1024m" ${JAVA_OPTIONS}
- In the setDomainEnv script for the managed servers and admin server (domain wide): USER_MEM_ARGS="-Xms1024m -Xmx1024m"
When there is free memory available in the heap but it is too fragmented and not contiguously located to store the object, or when there is actually insufficient memory, we can get java.lang.OutOfMemoryError. We should create a thread dump and analyze it, if possible, in case of such an error. The second option we can use to produce higher throughput is to tune garbage collection. We can roughly divide GC algorithms into two categories: parallel and concurrent. Parallel GC stops the execution of the whole application and performs a full GC; this generally provides better throughput but also higher latency, using all the CPU resources during GC. Concurrent GC, on the other hand, produces low latency but also lower throughput, since it performs GC while the application executes. The JRockit JVM provides some useful command-line parameters to control its GC scheme, like the -XgcPrio parameter, which takes the following options:
- XgcPrio:pausetime (to minimize latency; mostly concurrent GC)
- XgcPrio:throughput (to maximize throughput; parallel GC)
- XgcPrio:deterministic (to guarantee a maximum pause time, for real-time systems)
The Sun JVM has similar parameters (like -XX:+UseParallelGC or -XX:+UseConcMarkSweepGC) to control its GC scheme. We can add -verbosegc -XX:+PrintGCDetails to monitor indications of a problem with garbage collection. Try configuring the JVMs of all managed servers to execute in -server mode to ensure that they are optimized for a server-side production environment.

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are: <httpRuntime maxUrlLength="<number here>". This number should be an integer value (defaults to 260 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. This attribute gates the length of the URL without the query string. <httpRuntime maxQueryStringLength="<number here>". This number should be an integer value (defaults to 2048 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. <httpRuntime requestPathInvalidCharacters="List of characters you need included in ASP.NET's validation checks". By default the characters are "<,>,*,%,&,:,\,?". However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, I want the character '!' to be included in ASP.NET's URL validation logic. So I set the following: <httpRuntime requestPathInvalidCharacters="<,>,*,%,&,:,\,?,!". A character could also be specified in its XML-encoded form ('&lt;' would mean the '<' sign). I could specify the '!' in its XML-encoded Unicode format such as requestPathInvalidCharacters="<,>,*,%,&,:,\,?,&#x0021;" or I could specify it in its Unicode-encoded form, the "<,>,*,%,&,:,\,?,%u0021" format. The following settings can be applied at the root web.config level, app web.config level, folder level or within a location tag: <location path="some path here"> <system.web> <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidCharacters="" If any of the above settings fail request validation, an HTTP 400 "Bad Request" HttpException is thrown. These can be easily handled in the Application_Error handler in Global.asax. Also, a new attribute in <httpRuntime /> called "relaxedUrlToFileSystemMapping" has been added with a default of false. <httpRuntime … relaxedUrlToFileSystemMapping="true|false" /> When the relaxedUrlToFileSystemMapping attribute is set to false, inbound URLs still need to be valid NTFS file paths. For example, URLs (sans query string) need to be less than 260 characters; no path segment within a URL can use old-style DOS device names (LPT1, COM1, etc.); URLs must be valid Windows file paths. A URL like "http://digg.com/http://cnn.com" should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters="" above). Managed configuration for non-NTFS-compliant URLs is determined from the first valid configuration path found when walking up the path segments of the URL.
For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\ REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". Carl Dacosta ASP.NET QA Team
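As a footnote to the request-validation discussion above, here is a minimal, hypothetical sketch of the Global.asax Application_Error handler mentioned earlier. It traps the HTTP 400 HttpException that request validation throws; the redirect target is only an example page name, not something defined by ASP.NET.

```csharp
using System;
using System.Web;

// Rough sketch of a Global.asax.cs code-behind that handles the HTTP 400
// "Bad Request" HttpException thrown when a URL fails the checks above.
public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        var httpException = Server.GetLastError() as HttpException;

        // Request validation failures (bad characters, over-long URLs, etc.)
        // surface as an HttpException with status code 400.
        if (httpException != null && httpException.GetHttpCode() == 400)
        {
            Server.ClearError();
            Response.Redirect("~/BadRequest.aspx", false);  // hypothetical friendly-error page
        }
    }
}
```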

    Read the article

  • Enabling Http caching and compression in IIS 7 for asp.net websites

    - by anil.kasalanati
Caching – There are two ways to set HTTP caching: 1. the max-age property, or 2. the Expires header.
Doing the changes via the IIS Console –
1. Select the website for which you want to enable caching and then select HTTP Response Headers in the features tab.
2. Select Expire Web content and, by changing the After setting, you can generate the max-age property for the cache control header.
3. Following is the screenshot of the headers. Then you can use a tool like Fiddler and see the 304 (Not Modified) response coming from the server on repeat requests.
Doing it the web.config way – we can add a staticContent section in the system.webServer section: <system.webServer> <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" /> </staticContent>
Compression – By default static compression is enabled on IIS 7.0, but the only content type that falls under that category is CSS, which is not enough for most websites that use lots of JavaScript. If you thought that simply enabling dynamic compression would fix this, you would be wrong, so please follow these steps. On some machines dynamic compression is not installed; the steps to add it are:
- Open Server Manager
- Roles > Web Server (IIS)
- Role Services (scroll down) > Add Role Services
- Add the desired role (Web Server > Performance > Dynamic Content Compression)
- Next, Install, Wait…Done!
To enable it:
- Open Server Manager
- Roles > Web Server (IIS) > Internet Information Services (IIS) Manager
- Next pane: Sites > Default Web Site > Your Web Site
- Main pane: IIS > Compression
Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch in the MIME types. Go to the following location: C:\Windows\System32\inetsrv\config and open applicationHost.config. The mimeMap is as follows: <mimeMap fileExtension=".js" mimeType="application/javascript" /> So the entry in the staticTypes section should be changed to: <add mimeType="application/javascript" enabled="true" />
Doing it the web.config way – we can add the following section in the system.webServer section: <system.webServer> <urlCompression doDynamicCompression="false" doStaticCompression="true"/>
More Information/References –
- http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx
- http://www.west-wind.com/weblog/posts/98538.aspx
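Returning to the caching half of this post for a moment: the clientCache element above only covers static files served by IIS. For dynamic responses you can emit equivalent caching headers from code. The handler below is an illustrative sketch (not part of the original post) that mirrors the one-year max-age used in the web.config example.

```csharp
using System;
using System.Web;

// Sketch: an IHttpHandler that sets the same kind of Cache-Control/Expires
// headers on a dynamic response that clientCache applies to static files.
public class CachedContentHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Roughly equivalent to cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00"
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetMaxAge(TimeSpan.FromDays(365));
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(365));

        context.Response.ContentType = "application/javascript";
        context.Response.Write("/* cached script content */");
    }

    public bool IsReusable { get { return true; } }
}
```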

    Read the article

  • ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

    - by Madhu Nair
    post by David Allan ODI is renowned for its declarative designer and minimal expression based paradigm. The new ODI 12c release has extended this even further to provide an extended declarative mapping designer. The ODI 12c mapper is a fusion of ODI's new declarative designer with the familiar flow based designer while retaining ODI’s key differentiators of: Minimal expression based definition, The ability to incrementally design an interface and to extract/load data from any combination of sources, and most importantly Backed by ODI’s extensible knowledge module framework. The declarative nature of the product has been extended to include an extensible library of common components that can be used to easily build simple to complex data integration solutions. Big usability improvements through consistent interactions of components and concepts all constructed around the familiar knowledge module framework provide the utmost flexibility. Here is a little taster: So what is a mapping? A mapping comprises of a logical design and at least one physical design, it may have many. A mapping can have many targets, of any technology and can be arbitrarily complex. You can build reusable mappings and use them in other mappings or other reusable mappings. In the example below all of the information from an Oracle bonus table and a bonus file are joined with an Oracle employees table before being written to a target. Some things that are cool include the one-click expression cross referencing so you can easily see what's used where within the design. The logical design in a mapping describes what you want to accomplish  (see the animated GIF here illustrating how the above mapping was designed) . The physical design lets you configure how it is to be accomplished. So you could have one logical design that is realized as an initial load in one physical design and as an incremental load in another. In the physical design below we can customize how the mapping is accomplished by picking Knowledge Modules, in ODI 12c you can pick multiple nodes (on logical or physical) and see common properties. This is useful as we can quickly compare property values across objects - below we can see knowledge modules settings on the access points between execution units side by side, in the example one table is retrieved via database links and the other is an external table. In the logical design I had selected an append mode for the integration type, so by default the IKM on the target will choose the most suitable/default IKM - which in this case is an in-built Oracle Insert IKM (see image below). This supports insert and select hints for the Oracle database (the ANSI SQL Insert IKM does not support these), so by default you will get direct path inserts with Oracle on this statement. In ODI 12c, the mapper is just that, a mapper. Design your mapping, write to multiple targets, the targets can be in the same data server, in different data servers or in totally different technologies - it does not matter. ODI 12c will derive and generate a plan that you can use or customize with knowledge modules. Some of the use cases which are greatly simplified include multiple heterogeneous targets, multi target inserts for Oracle and writing of XML. Let's switch it up now and look at a slightly different example to illustrate expression reuse. In ODI you can define reusable expressions using user functions. These can be reused across mappings and the implementations specialized per technology. 
So you can have common expressions across Oracle, SQL Server, Hive etc. shielding the design from the physical aspects of the generated language. Another way to reuse is within a mapping itself. In ODI 12c expressions can be defined and reused within a mapping. Rather than replicating the expression text in larger expressions you can decompose into smaller snippets, below you can see UNIT_TAX AMOUNT has been defined and is used in two downstream target columns - its used in the TOTAL_TAX_AMOUNT plus its used in the UNIT_TAX_AMOUNT (a recording of the calculation).  You can see the columns that the expressions depend on (upstream) and the columns the expression is used in (downstream) highlighted within the mapper. Also multi selecting attributes is a convenient way to see what's being used where, below I have selected the TOTAL_TAX_AMOUNT in the target datastore and the UNIT_TAX_AMOUNT in UNIT_CALC. You can now see many expressions at once now and understand much more at the once time without needlessly clicking around and memorizing information. Our mantra during development was to keep it simple and make the tool more powerful and do even more for the user. The development team was a fusion of many teams from Oracle Warehouse Builder, Sunopsis and BEA Aqualogic, debating and perfecting the mapper in ODI 12c. This was quite a project from supporting the capabilities of ODI in 11g to building the flow based mapping tool to support the future. I hope this was a useful insight, there is so much more to come on this topic, this is just a preview of much more that you will see of the mapper in ODI 12c.

    Read the article

  • Multitask Like a Pro with AquaSnap

    - by Matthew Guay
    Are you tired of shuffling back and forth between windows?  Here’s a handy app that can help you keep all of your windows organized and accessible. AquaSnap is a great free utility that helps you use multiple windows at the same time easily and efficiently.  One of Windows 7’s greatest new features is Aero Snap, which lets you easily view windows side by side by simply dragging windows to side of your screen.  After using Windows 7 for the past year, Aero Snap is one of the features we really miss when using older versions of Windows. With AquaSnap, you now have all of the features of Aero Snap and more in Windows 2000, XP, Vista, and of course Windows 7.  Not only does it give you Aero Snap features, but AquaSnap also gives you more control over your windows to make you more productive. Getting Started AquaSnap is a a free download for Windows 2000, XP, Vista, and 7.  Download the small installer (link below) and install it with the default settings. AquaSnap automatically runs as soon as it is installed, and you will notice a new icon in your system tray. Now you can go ahead and put it to use.  Drag a window to any edge or corner of your desktop, and you will see an icon showing what part of the screen the window will cover. Dragging it to the side of the screen expanded the window to fill the right half of the screen, just like the default Aero Snap in Windows 7.  You can drag the window away to restore it to its former size. AquaSnap works on any corner of the screen too, so you can have 4 windows side-by-side.  We already have 3 windows snapped to the corners, and notice that we’re dragging a fourth window to the bottom right corner. You can also snap windows to the bottom and top of the screen.  Here we have Word snapped to the bottom half of the screen, and we’re dragging Chrome to the top. You can even snap internal windows in Multiple Document Interface (MDI) programs such as Excel.  Here we are snapping a workbook in Excel to the left to view 2 workbooks side-by-side.   Additionally, AquaSnap lets you keep any window always on top.  Simply shake any window, and it will turn semi-transparent and stay on top of all other windows.  Notice the transparent calculator here on top of Excel. All of AquaSnap’s features work great in Windows 2000, XP, and Vista too.  Here we are snapping IE6 to the left of the screen in XP. Here are 3 windows snapped to the sides in XP.  You can mix the snap modes, and have, for instance, two windows on the right side and one window on the left.  This is a great way to maximize productivity if you need more space in one of the windows. Even AquaShake works to keep a window transparent and on top in XP. Settings AquaSnap has a detailed settings dialog where you can tweak it to work exactly like you want.  Simply right-click on its icon in the taskbar, and select Settings. From the first screen, you can choose if you want AquaSnap to start with Windows, and if you want it to show an icon in the system tray.  If you turn off the system tray icon, you can access the AquaSnap settings from Start > All Programs > AquaSnap > Configuration (or simply search for Configuration in Vista or Windows 7). The second tab in settings lets you choose what you want each snapping region to do.  You can also choose two other presets, including AeroSnap (which works just like the default Aero Snap in Windows 7) and AquaSnap simple (which only snaps at the edges of the screen, not the corners). 
The third tab lets you increase or decrease the opacity of pinned windows when using AquaShake, and also lets you increase or decrease the shaking sensitivity. Additionally, if you prefer the standard AeroShake functionality, which minimizes all other open windows when you shake a window, you can choose that too. The fourth tab lets you activate an optional feature, AquaGlass. If you activate this, it will make windows turn transparent when you drag them across the screen. Finally, the last tab lets you change the color and opacity of the preview rectangle, or simply turn it off. Or, if you want to temporarily turn AquaSnap off, simply right-click on its icon and select Off. In Windows 7, turning off AquaSnap will restore your standard Windows Aero Snap functionality, and in other versions of Windows it will stop letting you snap windows at all. You can then repeat the steps and select On when you want to use AquaSnap again. Conclusion AquaSnap is a handy tool to make you more productive at your computer. With a wide variety of useful features, there’s something here for everyone. Download AquaSnap

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM is in relation to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM is protecting the cached rights on the local machine and if that cache has become invalid in anyway, this error is thrown. Offline rights and security First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution and demonstrates our methodology to ensure that solutions address both security and usability and puts the balance of these two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed however and can be as low as under an hour to as long as years. It is also possible to switch off the ability to access content offline which can be useful when content is very sensitive and requires a tight leash. So when a user is online, transparently in the background, the Oracle IRM Desktop communicates with the server and updates the users rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops and allows for users rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution. A user doesn't have to make any decision when going offline, they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work because people forget to do this and will therefore be unable to legitimately access their content offline. If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and they happen to make a connection to the internet 3 days into that offline period, the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk is reduced when setting long offline periods, because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week offline period, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place. If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important, consider the scenario where a senior executive downloads all their email but doesn't open any of it. Disconnects the laptop and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized their experience is a good one and the document opens. 
More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams. So what about problems accessing the offline rights database? So onto the core issue... when these rights are cached to your machine they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the installation of the IRM Desktop and the Windows user account. Why? Well, what you do not want to happen is for someone to get their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines. This is because people typically access IRM content on more than one computer: their work desktop, a laptop and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache. So what happens if these files are corrupted in some way? That's when you will see the error, Unable to Connect to Offline database. The most common instance of seeing this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMWare Workstation for example, makes changes to the unique information of that virtual machine and as such invalidates the offline database. How do you solve the problem? The resolution, however, is simple. You just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However, this does mean that the IRM server will think you have your rights cached to more than one computer and you will need to re-request your rights, even though you are only going to be accessing them on one, because it still thinks the old cache is valid. So be aware, it is good practice to increase the server limit from the default of 1 to say 3 or 4. This is done using the Enterprise Manager instance of IRM. To delete these offline files I have a simple .bat file you can use; Download DeleteOfflineDBs.bat Note that this uses pskill to stop the irmBackground.exe process from running. This is part of the IRM Desktop and holds open a lock to the offline database. Either kill this from Task Manager or use pskill as part of the script.

    Read the article

  • Play Your Favorite DOS Games in XP, Vista, and Windows 7

    - by Matthew Guay
    Want to take a trip down memory lane with old school DOS games?  D-Fend Reloaded makes it easy for you to play your favorite DOS games directly on XP, Vista, and Windows 7. D-Fend Reloaded is a great frontend for DOSBox, the popular DOS emulator.  It lets you install and run many DOS games and applications directly from its interface without ever touching a DOS prompt.  It works great on XP, Vista, and Windows 7 32 & 64-bit versions.   Getting Started Download D-Fend Reloaded (link below), and install with the default settings.  You don’t need to install DOSBox, as D-Fend Reloaded will automatically install all the components you need to run DOS games on Windows. D-Fend Reloaded can also be installed as a portable application, so you can run it from a flash drive on any Windows computer by selecting User defined installation. Then select Portable mode installation. Once D-Fend Reloaded is installed, you can go ahead and open the program. Then simply click “Accept all settings” to apply the default settings.   D-Fend is now ready to run all of your favorite DOS games. Installing DOS Games and Applications: To install a DOS game or application, simply drag-and-drop a zip file of the app into D-Fend Reloaded’s window.  D-Fend Reloaded will automatically extract the program… Then will ask you to name the application and choose where to store it — by default it uses the name of the DOS app. Now you’ll see a new entry for the app you just installed.  Simply double-click to run it.   D-Fend will remind you that you can switch out of fullscreen mode by pressing Alt+Enter, and can also close the DOS application by pressing Ctrl+F9.  Press Ok to run the program. Here we’re running Ms. PacPC, a remake of the classic game Ms. Pac-Man, in full-screen mode.  All features work automatically, including sound, and you never have to setup anything from DOS command line — it just works. Here it’s in windowed mode running on Windows 7. Please note that your color scheme may change to Windows Basic while running DOS applications. You can run DOS application just as easily.  Here’s Word 5.5 running in in DOSBox through D-Fend Reloaded… Game Packs: Want to quickly install many old DOS freeware and trial games?  D-Fend Reloaded offers several game packs that let you install dozens of DOS games with only four clicks…just download and run the game pack installer of your choice (link below). Now you’ve got a selection of DOS games to choose from. Here’s a group of poor lemmings walking around … in Windows 7. Conclusion D-Fend Reloaded gives you a great way to run your favorite DOS games and applications directly from XP, Vista, and Windows 7.  Give it a try, and relive your DOS days from the comfort of your Windows desktop. What were some of your favorite DOS games and applications? Leave a comment and let us know. Links Download D-Fend Reloaded Download DOS game packs for D-Fend Reloaded Download Ms. 
Pac-PC

    Read the article

  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
    Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on that). I ran into some pretty nasty compatibility issues regarding .NET 3.5 which I'll describe in this post. I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with developer folks reading my blog. Specifically it's the install order that screwed things up for me -  installing Visual Studio before explicitly installing .NET 3.5 from Windows Features - in particular. If you install Visual Studio 2010 I highly recommend you install .NET 3.5 from Windows features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through. So when I installed Windows 8, and then looked at the Windows Features to install after the fact in the Windows Feature dialog, I thought - .NET 3.5 - who needs it. I'd be happy to not have to install .NET 3.5, but unfortunately I found out quite a while after initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it. Enabling .NET 3.5 in Windows 8 If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to do this because you can use the Windows Features dialog to enable .NET 3.5: And that *should* do the trick. If you do this before you install other apps that require .NET 3.5 and install a non-SP1 one version of it, you are going to have no problems. Unfortunately for me, even after I've installed the above, when I run the CodeRush installer I still get this lovely dialog: Now I double checked to see if .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET Console app and running it to verify that it actually runs. And it does… So naturally I thought the CodeRush installer is a little whacky. After some back and forth Alex Skorkin on Twitter pointed me in the right direction: He asked me to look in the registry for exact info on which version of .NET 3.5 is installed here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP where I found that .NET 3.5 SP1 was installed. This is the 64 bit key which looks all correct. However, when I looked under the 32 bit node I found: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5 Notice that the service pack number is set to 0, rather than 1 (which it was for the 64 bit install), which is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, Ok… thanks for that! Easy to fix, you say - just install SP1. Nope, not so easy because the standalone installer doesn't work on Windows 8. I can't get either .NET 3.5 installer or the SP 1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it to do it explicitly. 
How did this happen? I'm not sure exactly whether this is cause and effect, but I suspect the story goes like this: I installed Windows 8 without support for .NET 3.5, then installed Visual Studio 2010, which installs .NET 3.5 (no SP). At that point I had .NET 3.5 installed, but without SP1. I then tried to install CodeRush (error: .NET 3.5 SP1 required) and enabled .NET 3.5 in Windows Features, figuring that enabling the .NET 3.5 Windows feature would do the trick. But still no go. I now suspect Visual Studio installed the 32 bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left at least the 32 bit install with no SP1 installed. How to fix it: my final solution was to completely uninstall .NET 3.5 *and* to reboot: go to Windows Features, uncheck the .NET Framework 3.5, restart Windows, go back to Windows Features and check .NET Framework 3.5 - and voila, I now have a proper installation of .NET 3.5. I tried this before but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5! More problems: the above fixed me right up, but in looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog. The problem there is that the feature isn't properly loading from the installer disks or isn't downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows: dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs. You can try this without the /Source flag first, which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer lives. This also gives you a little more information if something does go wrong. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET
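If you want to check which service pack level is recorded for .NET 3.5 without opening regedit, something along these lines should work; it is only a sketch that reads the same registry keys mentioned above, and it assumes you run it as a 64 bit process so the native and Wow6432Node paths really map to the two different views.

    // Reads the Install and SP values for .NET 3.5 from both registry views.
    using System;
    using Microsoft.Win32;

    class Check35Sp
    {
        static void Main()
        {
            string[] paths =
            {
                @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5",              // 64 bit view
                @"SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5"   // 32 bit view
            };

            foreach (string path in paths)
            {
                using (RegistryKey key = Registry.LocalMachine.OpenSubKey(path))
                {
                    if (key == null)
                        Console.WriteLine(path + " : not found");
                    else
                        Console.WriteLine("{0} : Install={1}, SP={2}",
                            path, key.GetValue("Install"), key.GetValue("SP"));
                }
            }
        }
    }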

    Read the article

  • Internet Explorer and Cookie Domains

    - by Rick Strahl
    I've been bitten by some nasty issues today in regards to using a domain cookie as part of my FormsAuthentication operations. In the app I'm currently working on we need to have single sign-on that spans multiple sub-domains (www.domain.com, store.domain.com, mail.domain.com etc.). That's what a domain cookie is meant for - when you set the cookie with a Domain value of the base domain the cookie stays valid for all sub-domains. I've been testing the app for quite a while and everything is working great. Finally I get around to checking the app with Internet Explorer and I start discovering some problems - specifically on my local machine using localhost. It appears that Internet Explorer (all versions) doesn't allow you to specify a domain of localhost, a local IP address or machine name. When you do, Internet Explorer simply ignores the cookie. In my last post I talked about some generic code I created to basically parse out the base domain from the current URL so a domain cookie would automatically used using this code:private void IssueAuthTicket(UserState userState, bool rememberMe) { FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, userState.UserId, DateTime.Now, DateTime.Now.AddDays(10), rememberMe, userState.ToString()); string ticketString = FormsAuthentication.Encrypt(ticket); HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString); cookie.HttpOnly = true; if (rememberMe) cookie.Expires = DateTime.Now.AddDays(10); var domain = Request.Url.GetBaseDomain(); if (domain != Request.Url.DnsSafeHost) cookie.Domain = domain; HttpContext.Response.Cookies.Add(cookie); } This code works fine on all browsers but Internet Explorer both locally and on full domains. And it also works fine for Internet Explorer with actual 'real' domains. However, this code fails silently for IE when the domain is localhost or any other local address. In that case Internet Explorer simply refuses to accept the cookie and fails to log in. Argh! The end result is that the solution above trying to automatically parse the base domain won't work as local addresses end up failing. Configuration Setting Given this screwed up state of affairs, the best solution to handle this is a configuration setting. Forms Authentication actually has a domain key that can be set for FormsAuthentication so that's natural choice for the storing the domain name: <authentication mode="Forms"> <forms loginUrl="~/Account/Login" name="gnc" domain="mydomain.com" slidingExpiration="true" timeout="30" xdt:Transform="Replace"/> </authentication> Although I'm not actually letting FormsAuth set my cookie directly I can still access the domain name from the static FormsAuthentication.CookieDomain property, by changing the domain assignment code to:if (!string.IsNullOrEmpty(FormsAuthentication.CookieDomain)) cookie.Domain = FormsAuthentication.CookieDomain; The key is to only set the domain when actually running on a full authority, and leaving the domain key blank on the local machine to avoid the local address debacle. Note if you want to see this fail with IE, set the domain to domain="localhost" and watch in Fiddler what happens. Logging Out When specifying a domain key for a login it's also vitally important that that same domain key is used when logging out. Forms Authentication will do this automatically for you when the domain is set and you use FormsAuthentication.SignOut(). 
If you use an explicit Cookie to manage your logins or another persistent value, make sure that when you log out you also specify the domain. In other words, the expiring cookie you set for a 'logout' should match the same settings - name, path, domain - as the cookie you used to set the value: HttpCookie cookie = new HttpCookie("gne", ""); cookie.Expires = DateTime.Now.AddDays(-5); // make sure we use the same logic to release cookie var domain = Request.Url.GetBaseDomain(); if (domain != Request.Url.DnsSafeHost) cookie.Domain = domain; HttpContext.Response.Cookies.Add(cookie); I managed to get my code to do what I needed it to, but man, I'm getting so sick and tired of fixing IE-only bugs. I spent most of the day today fixing a number of small IE layout bugs along with this issue, which took a bit of time to trace down. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET
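The GetBaseDomain() extension method used above comes from the earlier post the article references; its exact implementation isn't shown here, but a minimal approximation could look like the following. This is an assumption-laden sketch - it does not, for example, handle multi-part public suffixes such as co.uk.

    using System;
    using System.Net;

    public static class UriExtensions
    {
        // Returns the last two host segments (mydomain.com from store.mydomain.com).
        // localhost, machine names and IP addresses are returned unchanged, so the
        // "domain != DnsSafeHost" check in the article leaves Domain unset for them.
        public static string GetBaseDomain(this Uri uri)
        {
            string host = uri.DnsSafeHost;

            IPAddress ip;
            if (IPAddress.TryParse(host, out ip))
                return host;

            string[] tokens = host.Split('.');
            if (tokens.Length <= 2)
                return host;

            return tokens[tokens.Length - 2] + "." + tokens[tokens.Length - 1];
        }
    }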

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
    ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are: <httpRuntime maxUrlLength="<number here>" - this should be an integer value (defaults to 260 characters). The value must be greater than or equal to zero, though obviously very small values will lead to an unusable website. This attribute gates the length of the URL without the query string. <httpRuntime maxQueryStringLength="<number here>" - this should be an integer value (defaults to 2048 characters). The value must be greater than or equal to zero, though again very small values will lead to an unusable website. <httpRuntime requestPathInvalidCharacters="list of characters you need included in ASP.NET's validation checks" /> - by default the characters are "<,>,*,%,&,:,\,?". However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, I want the character '!' to be included in ASP.NET's URL validation logic, so I set the following: <httpRuntime requestPathInvalidCharacters="<,>,*,%,&,:,\,?,!". A character can also be specified in its XML-encoded form ('&lt;' would mean the '<' sign). I could specify the '!' in its XML-encoded Unicode format, such as requestPathInvalidCharacters="<,>,*,%,&,:,\,?,&#x0021;", or in its URL-encoded Unicode form, the "<,>,*,%,&,:,\,?,%u0021" format. These settings can be applied at the root web.config level, app web.config level, folder level or within a location tag: <location path="some path here"> <system.web> <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidCharacters="" /> If any of the above settings fail request validation, an HTTP 400 "Bad Request" HttpException is thrown. These can easily be handled in the Application_Error handler in Global.asax (a minimal handler sketch appears at the end of this excerpt). Also, a new attribute in <httpRuntime /> called "relaxedUrlToFileSystemMapping" has been added with a default of false: <httpRuntime … relaxedUrlToFileSystemMapping="true|false" />. When the relaxedUrlToFileSystemMapping attribute is set to false, inbound URLs still need to be valid NTFS file paths. For example, URLs (sans query string) need to be less than 260 characters; no path segment within a URL can use old-style DOS device names (LPT1, COM1, etc…); URLs must be valid Windows file paths. A URL like "http://digg.com/http://cnn.com" should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters="" above). Managed configuration for non-NTFS-compliant URLs is determined from the first valid configuration path found when walking up the path segments of the URL.
For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\ REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH".
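As a footnote to the article above, here is one minimal shape such an Application_Error handler could take. Treat it as a sketch rather than the author's own code; the response text and what you do after clearing the error are up to you.

    // Global.asax.cs - turn request-validation failures into a friendly 400 response.
    protected void Application_Error(object sender, EventArgs e)
    {
        var ex = Server.GetLastError();
        if (ex == null)
            return;

        var httpEx = ex.GetBaseException() as HttpException;
        if (httpEx != null && httpEx.GetHttpCode() == 400)
        {
            // Raised when maxUrlLength, maxQueryStringLength or
            // requestPathInvalidCharacters rejects the incoming URL.
            Server.ClearError();
            Response.Clear();
            Response.StatusCode = 400;
            Response.Write("The requested URL is not valid.");
        }
    }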

    Read the article

  • JPA - insert and retrieve clob and blob types

    - by pachunoori.vinay.kumar(at)oracle.com
    This article describes about the JPA feature for handling clob and blob data types.You will learn the following in this article. @Lob annotation Client code to insert and retrieve the clob/blob types End to End ADFaces application to retrieve the image from database table and display it in web page. Use Case Description Persisting and reading the image from database using JPA clob/blob type. @Lob annotation By default, TopLink JPA assumes that all persistent data can be represented as typical database data types. Use the @Lob annotation with a basic mapping to specify that a persistent property or field should be persisted as a large object to a database-supported large object type. A Lob may be either a binary or character type. TopLink JPA infers the Lob type from the type of the persistent field or property. For string and character-based types, the default is Clob. In all other cases, the default is Blob. Example Below code shows how to use this annotation to specify that persistent field picture should be persisted as a Blob. public class Person implements Serializable {    @Id    @Column(nullable = false, length = 20)    private String name;    @Column(nullable = false)    @Lob    private byte[] picture;    @Column(nullable = false, length = 20) } Client code to insert and retrieve the clob/blob types Reading a image file and inserting to Database table Below client code will read the image from a file and persist to Person table in database.                       Person p=new Person();                      p.setName("Tom");                      p.setSex("male");                      p.setPicture(writtingImage("Image location"));// - c:\images\test.jpg                       sessionEJB.persistPerson(p); //Retrieving the image from Database table and writing to a file                       List<Person> plist=sessionEJB.getPersonFindAll();//                      Person person=(Person)plist.get(0);//get a person object                      retrieveImage(person.getPicture());   //get picture retrieved from Table //Private method to create byte[] from image file  private static byte[] writtingImage(String fileLocation) {      System.out.println("file lication is"+fileLocation);     IOManager manager=new IOManager();        try {           return manager.getBytesFromFile(fileLocation);                    } catch (IOException e) {        }        return null;    } //Private method to read byte[] from database and write to a image file    private static void retrieveImage(byte[] b) {    IOManager manager=new IOManager();        try {            manager.putBytesInFile("c:\\webtest.jpg",b);        } catch (IOException e) {        }    } End to End ADFaces application to retrieve the image from database table and display it in web page. Please find the application in this link. Following are the j2ee components used in the sample application. ADFFaces(jspx page) HttpServlet Class - Will make a call to EJB and retrieve the person object from person table.Read the byte[] and write to response using Outputstream. SessionEJBBean - This is a session facade to make a local call to JPA entities JPA Entity(Person.java) - Person java class with setter and getter method annotated with @Lob representing the clob/blob types for picture field.

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
    Service Principal Names (SPNs) are a must for kerberos authentication which is a must when using sharepoint, reporting services and sql server where you access one server that then needs to access another resource, this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you, after all you are not giving the service your password. To do this you need to be using kerberos. The following is my simple interpretation of how kerberos works. I find the Kerberos documentation rediculously complex so the following might be sligthly wrong but I think its close enough. Keberos works on a ticketing system, the prinicipal is that you get a security token from AD and then you can pass that to the service in the middle which can then use that token to impersonate you. For that to work AD has to be able to identify who is allowed to use the token, in this case the service account.But how do you as a client know what service account the service in the middle is configured with. The answer is SPNs. The SPN is the mapping between your logical connection to the service account. One type of SPN is for the DNS name for the server and the port. i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account. Well it can be done in two ways, either you can have a mapping defined in AD or AD can use a default mapping (this is something I didn't know about). To map the SPN in AD then you have to add the SPN to the user account, this is documented in the first link below either directly or using a tool called SetSPN. You might say that is complex, well it is and thats why SQL Server tries to do it for you, at start up it tries to connect to AD and set the SPN on the account it is running as, clearly that can only happen IF SQL is running as a domain account AND importantly it has permission to do so. By default a normal domain user account doesn't have the correct permission, and is why so many people have this problem. If the account is a domain admin then it will have permission, but non of us run SQL using domain admin accounts do we. You might also note that the SPN contains the port number (this isn't a requirement now in sql 2008 but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance) what do you do, well every time the port changes you need to change the SPN allocated to the account. Thats why its advised to let SQL Server register the SPN itself. You may also have thought, well what happens if I change my service account, won't that lead to two accounts with the same SPN. Possibly. Having two accounts with the same SPN is definitely a problem. Why? Well because if there are two accounts Kerberos can't identify the exact account that the service is running as, it could be either account, and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs Reading this you will probably be thinking Oh my goodness this is really difficult. It is however I've found today in investigating something else that there is an easy option. Use Network Service as your service account. Network Service is a special account and is tied to the computer. It appears that Network Service has the update rights to AD to set an SPN mapping for the computer account. 
This then allows the SPN mapping to work. I believe this also works for the local system account. To get all the SPNs in your AD run the following, it could be a large file, so you might want to restrict it to a specific OU, or CN ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt You will read in the links below that you need SQL to register the SPN this is done how to use Kerberos authenticaiton in SQL Server - http://support.microsoft.com/kb/319723 Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx Summary The only reason I personally know to use a domain account is when you can't get kerberos to work and you want to do BULK INSERT or other network service that requires access to a a remote server. In this case you have to resort to using SQL authentication and the SQL Server uses its service account to access the remote service, and thus you need a domain account. You migth need this if using some forms of replication. I've always found Kerberos awkward to setup and so fallen back to this domain account approach. So in summary to get Kerberos to work try using the network service or local system accounts. For a great post from the Adam Saxton of the SQL Server support team go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx 

    Read the article

  • Rendering ASP.NET MVC Razor Views outside of MVC revisited

    - by Rick Strahl
    Last year I posted a detailed article on how to render Razor Views to string both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC Controller Context to render a view by simply passing the controller context which is fairly trivial and I demonstrated a simple ViewRenderer class that simplified the process down to a couple lines of code. However, if no Controller Context is available the process is not quite as straight forward and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom self-contained implementation of the Razor templating engine that can be hosted completely outside of ASP.NET. While it works inside of ASP.NET, it’s an awkward solution when running inside of ASP.NET, because it requires a bit of setup to run efficiently.Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext, if you have a controller instance, even if MVC didn’t create that instance. Creating a Controller Instance outside of MVCThe trick to make this work is to create an MVC Controller instance – any Controller instance – and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available it’s possible to create a fully functional controller context as Razor can get all the necessary context information from the HttpContextWrapper().The key to make this work is the following method:/// <summary> /// Creates an instance of an MVC controller from scratch /// when no existing ControllerContext is present /// </summary> /// <typeparam name="T">Type of the controller to create</typeparam> /// <returns>Controller Context for T</returns> /// <exception cref="InvalidOperationException">thrown if HttpContext not available</exception> public static T CreateController<T>(RouteData routeData = null) where T : Controller, new() { // create a disconnected controller instance T controller = new T(); // get context wrapper from HttpContext if available HttpContextBase wrapper = null; if (HttpContext.Current != null) wrapper = new HttpContextWrapper(System.Web.HttpContext.Current); else throw new InvalidOperationException( "Can't create Controller Context if no active HttpContext instance is available."); if (routeData == null) routeData = new RouteData(); // add the controller routing if not existing if (!routeData.Values.ContainsKey("controller") && !routeData.Values.ContainsKey("Controller")) routeData.Values.Add("controller", controller.GetType().Name .ToLower() .Replace("controller", "")); controller.ControllerContext = new ControllerContext(wrapper, routeData, controller); return controller; }This method creates an instance of a Controller class from an existing HttpContext which means this code should work from anywhere within ASP.NET to create a controller instance that’s ready to be rendered. This means you can use this from within an Application_Error handler as I needed to or even from within a WebAPI controller as long as it’s running inside of ASP.NET (ie. not self-hosted). Nice.So using the ViewRenderer class from the previous article I can now very easily render an MVC view outside of the context of MVC. 
Here's what I ended up with in my application's custom error HttpModule: protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model) { var Response = HttpContext.Current.Response; Response.ContentType = "text/html"; Response.StatusCode = errorHandler.OriginalHttpStatusCode; var context = ViewRenderer.CreateController<ErrorController>().ControllerContext; var renderer = new ViewRenderer(context); string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model); Response.Write(html); } That's pretty sweet, because it's now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code. This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic view as a base for the controller context when none is passed: public ViewRenderer(ControllerContext controllerContext = null) { // Create a known controller from HttpContext if no context is passed if (controllerContext == null) { if (HttpContext.Current != null) controllerContext = CreateController<ErrorController>().ControllerContext; else throw new InvalidOperationException( "ViewRenderer must run in the context of an ASP.NET " + "Application and requires HttpContext.Current to be present."); } Context = controllerContext; } In this case I use the ErrorController class, which is a generic controller instance that exists in the same assembly as my ViewRenderer class, and that works just fine since 'generically' rendered views tend not to rely on anything from the controller other than the model, which is explicitly passed. While these days most of my apps use MVC, I do still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which, when they error, often need to display error output. In other cases I need to generate string template output for emailing or logging data to disk. Being able to simply render an arbitrary view and pass in a model makes this super nice and easy, at least within the context of an ASP.NET application! You can check out the updated ViewRenderer class below to render your 'generic views' from anywhere within your ASP.NET applications. Hope some of you find this useful. Resources: ViewRenderer Class in Westwind.Web.Mvc Library (GitHub), Original ViewRenderer Article, Razor Hosting Library (GitHub), Original Razor Hosting Article. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in ASP.NET, MVC
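To make the "string template output for emailing" scenario concrete, usage of the ViewRenderer described above could look roughly like this; the view path, the order/customer model and the SendEmail helper are made-up names for illustration only.

    // Anywhere inside an ASP.NET request (HttpContext.Current must be available).
    var renderer = new ViewRenderer();   // builds its own controller context internally
    string body = renderer.RenderView("~/Views/Templates/OrderConfirmation.cshtml", order);

    // 'SendEmail' stands in for whatever mail helper the application actually uses.
    SendEmail(customer.Email, "Your order confirmation", body);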

    Read the article

  • Using the new CSS Analyzer in JavaFX Scene Builder

    - by Jerome Cambon
    As you know, JavaFX provides from the API many properties that you can set to customize or make your components to behave as you want. For instance, for a Button, you can set its font, or its max size.Using Scene Builder, these properties can be explored and modified using the inspector. However, JavaFX also provides many other properties to have a fine grained customization of your components : the css properties. These properties are typically set from a css stylesheet. For instance, you can set a background image on a Button, change the Button corners, etc... Using Scene Builder, until now, you could set a css property using the inspector Style and Stylesheet editors. But you had to go to the JavaFX css documentation to know the css properties that can be applied to a given component. Hopefully, Scene Builder 1.1 added recently a very interesting new feature : the CSS Analyzer.It allows you to explore all the css properties available for a JavaFX component, and helps you to build your css rules. A very simple example : make a Button rounded Let’s take a very simple example:you would like to customize your Buttons to make them rounded. First, enable the CSS Analyzer, using the ‘View->Show CSS Analyzer’ menu. Grow the main window, and the CSS Analyzer to get more room: Then, drop a Button from the Library to the ContentView: the CSS Analyzer is now showing the Button css properties: As you can see, there is a ‘-fx-background-radius’ css property that allow to define the radius of the background (note that you can get the associated css documentation by clicking on the property name). You can then experiment this by setting the Button style property from the inspector: As you can see in the css doc, one can set the same radius for the 4 corners by a simple number. Once the style value is applied, the Button is now rounded, as expected.Look at the CSS Analyzer: the ‘-fx-background-radius’ property has now 2 entries: the default one, and the one we just entered from the Style property. The new value “win”: it overrides the default one, and become the actual value (to highlight this, the cell background becomes blue). Now, you will certainly prefer to apply this new style to all the Buttons of your FXML document, and have a css rule for this.To do this, save you document first, and create a css file in the same directory than the new document.Create an empty css file (e.g. test.css), and attach it the the root AnchorPane, by first selecting the AnchorPane, then using the Stylesheets editor from the inspector: Add the corresponding css rule to your new test.css file, from your preferred editor (Netbeans for me ;-) and save it. .button { -fx-background-radius: 10px;} Now, select your Button and have a look at the CSS Analyzer. As you can see, the Button is inheriting the css rule (since the Button is a child of the AnchorPane), and still have its inline Style. The Inline style “win”, since it has precedence on the stylesheet. The CSS Analyzer columns are displayed by precedence order.Note the small right-arrow icons, that allow to jump to the source of the value (either test.css, or the inspector in this case).Of course, unless you want to set a specific background radius for this particular Button, you can remove the inline Style from the inspector. Changing the color of a TitledPane arrow In some cases, it can be useful to be able to select the inner element you want to style directly from the Content View . Drop a TitledPane to the Content View. 
Then select the CSS cursor from the CSS Analyzer (the other cursor on the left lets you come back to 'standard' selection), which lets you select an inner element directly… Select the TitledPane arrow, which gets a yellow background… and the Styleable Path is updated. To define a new css rule, you can first copy the Styleable Path, then paste it in your test.css file. Then add an entry to set -fx-background-color to red. You should have something like: .titled-pane:expanded .title .arrow-button .arrow { -fx-background-color : red;} As soon as test.css is saved, the change is taken into account in Scene Builder. You can also use the Styleable Path to discover all the inner elements of TitledPane, by clicking on the arrow icon. More details: you can see the CSS Analyzer in action (and many other features) in the JavaOne BOF session BOF4279 - In-Depth Layout and Styling with the JavaFX Scene Builder, presented by my colleague Jean-Francois Denise. On the right-hand side, click on the Media link to go to the streaming video of the presentation. The Scene Builder support of CSS starts at 9:20; the CSS Analyzer presentation starts at 12:50.

    Read the article

  • HTML5 and CSS3 Editing in Windows Live Writer

    - by Rick Strahl
    Windows Live Writer is a wonderful tool for editing blog posts and getting them posted to your blog. What makes it nice is that it has a small set of useful features, plus a simple plug-in model that has spawned many useful add-ins. Small tool with a reasonably decent plug-in model to extend equals a great solution to a simple problem. If you're running Windows, have a blog and aren’t using Live Writer you’re probably doing it wrong…One of Live Writer’s nice features is that it can download your blog’s CSS for preview and edit displays. It lets you edit your content inside of the context of that CSS using the WYSIWYG editor, so your content actually looks very close to what you’ll see on your blog while you’re editing your post. Unfortunately Live Writer renders the HTML content in the Web Browser Control’s  default IE 7 rendering mode. Yeah you read that right: IE 7 is the default for the Web Browser control and most applications that use it, are stuck in this modus unless the application explicitly overrides this default. The Web Browser control does not use the version of Internet Explorer installed on the system (IE 10 on my Win8 machine) but uses IE 7 mode for ‘compatibility’ for old applications.If you are importing your blog’s CSS that may suck if you’re using rich HTML 5 and CSS 3 formatting. Hack the Registry to get Live Writer to render using IE 9 or 10In order to get Live Writer (or any other application that uses the Web Browser Control for that matter) to render you can apply a registry hack that overrides the Web Browser Control engine usage for a specific application. I wrote about this in detail in a previous blog post a couple of years back.Here’s how you can set up Windows Live Writer to render your CSS 3 by making a change in your registry:The above is for setup on a 64 bit machine, where I configure Live Writer which is a 32 bit application for using IE 10 rendering. The keys set are as follows:32bit Configuration on 64 bit machine:HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATIONKey: WindowsLiveWriter.exeValue: 9000 or 10000  (IE 9 or 10 respectively) (DWORD value)On a 32 bit only machine: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATIONKey: WindowsLiveWriter.exeValue: 9000 or 10000  (IE 9 or 10 respectively) (DWORD value)Use decimal values of 9000, 10000 or 11000 to specify specific versions of Internet Explorer. This is a minor tweak, but it’s nice to actually see my blog posts now with the proper CSS formatting intact. Notice the rounded borders and shadow on the code blocks as well as the overflow-x and scrollbars that show up. In this particular case I can see what the code blocks actually look like in a specific resolution – much better than in the old plain view which just chopped things off at the end of the window frame. There are a few other elements that now show properly in the editor as well including block quotes and note boxes that I occasionally use. It’s minor stuff, but it makes the editing experience better yet and closer to the final things so there are less republish operations than I previously had. Sweet!Note that this approach of putting an IE version override into the registry works with most applications that use the Web Browser control. 
If you are using the Web Browser control in your own applications, it's a good idea to switch the browser to a more recent version so you can take advantage of HTML 5 and CSS 3 in your browser-displayed content, by automatically setting this flag in the registry or as part of the application's startup routine if no dedicated setup tool is used. At the very least you might set it to 9000 (IE 9), which supports most of the basic CSS3 features and is a decent baseline that works for most Windows 7 and 8 machines. If running pre-IE9, the browser will fall back to IE7 rendering and look bad, but at least more recent browsers will see an improved experience. I'm surprised that there aren't more vendors and third party apps using this feature. You can see in my first screen shot that there are only very few entries in the registry key group on my machine – any other apps that use the Web Browser control are using IE7. Go figure. Certainly Windows Live Writer should be writing this key into the registry automatically as part of installation to support this functionality out of the box, but alas, since it does not, this registry hack lets you get your way anyway… Resources: .reg files to register Live Writer Browser Emulation (set for IE9), Specifying Internet Explorer Version for Applications, SnagIt LiveWriter Plug-in, Download Windows Live Writer, Download Windows Live Writer with Chocolatey. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Live Writer, Windows
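If you'd rather not ship .reg files, the startup-routine approach mentioned above can be done in a few lines of C#. This sketch writes the per-user key, which generally works without admin rights; that choice (HKCU instead of the HKLM key shown in the article) is my assumption, so verify it fits your deployment.

    using Microsoft.Win32;

    public static class BrowserEmulation
    {
        // Call once at startup, e.g. EnsureBrowserEmulation("MyApp.exe", 10000);
        // 9000 = IE9, 10000 = IE10, 11000 = IE11 (decimal DWORD values).
        public static void EnsureBrowserEmulation(string exeName, int ieVersion)
        {
            const string keyPath =
                @"Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION";

            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(keyPath))
            {
                var current = key.GetValue(exeName) as int?;
                if (current == null || current.Value != ieVersion)
                    key.SetValue(exeName, ieVersion, RegistryValueKind.DWord);
            }
        }
    }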

    Read the article

  • Sort Your Emails by Conversation in Outlook 2010

    - by Matthew Guay
    Do you prefer the way Gmail sorts your emails by conversation?  Here’s how you can use this handy feature in Outlook 2010 too. One exciting new feature in Outlook 2010 is the ability to sort and link your emails by conversation.  This makes it easier to know what has been discussed in emails, and helps you keep your inbox more tidy.  Some users don’t like their emails linked into conversations, and in the final release of Outlook 2010 it is turned off by default.  Since this is a new feature, new users may overlook it and never know it’s available.  Here’s how you can enable conversation view and keep your email conversations accessible and streamlined. Activate Conversation View By default, your inbox in Outlook 2010 will look much like it always has in Outlook…a list of individual emails. To view your emails by conversation, select the View tab and check the Show as Conversations box on the top left. Alternately, click on the Arrange By tab above your emails, and select Show as Conversations. Outlook will ask if you want to activate conversation view in only this folder or all folders.  Choose All folders to view all emails in Outlook in conversations. Outlook will now resort your inbox, linking emails in the same conversation together.  Individual emails that don’t belong to a conversation will look the same as before, while conversations will have a white triangle carrot on the top left of the message title.  Select the message to read the latest email in the conversation. Or, click the triangle to see all of the messages in the conversation.  Now you can select and read any one of them. Most email programs and services include the previous email in the body of an email when you reply.  Outlook 2010 can recognize these previous messages as well.  You can navigate between older and newer messages from popup Next and Previous buttons that appear when you hover over the older email’s header.  This works both in the standard Outlook preview pane and when you open an email in its own window.   Edit Conversation View Settings Back in the Outlook View tab, you can tweak your conversation view to work the way you want.  You can choose to have Outlook Always Expand Conversations, Show Senders Above the Subject, and to Use Classic Indented View.  By default, Outlook will show messages from other folders in the conversation, which is generally helpful; however, if you don’t like this, you can uncheck it here.  All of these settings will stay the same across all of your Outlook accounts. If you choose Indented View, it will show the title on the top and then an indented message entry underneath showing the name of the sender. The Show Senders Above the Subject view makes it more obvious who the email is from and who else is active in the conversation.  This is especially useful if you usually only email certain people about certain topics, making the subject lines less relevant. Or, if you decide you don’t care for conversation view, you can turn it off by unchecking the box in the View tab as above. Conclusion Although it may take new users some time to get used to, conversation view can be very helpful in keeping your inbox organized and letting important emails stay together.  If you’re a Gmail user syncing your email account with Outlook, you may find this useful as it makes Outlook 2010 work more like Gmail, even when offline. If you’d like to sync your Gmail account with Outlook 2010, check out our articles on syncing it with POP3 and IMAP. 

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Abr/10)

    - by Claudia Costa
    We are delighted to announce the Oracle ECM 10g R3 Implementation boot camp, which will take place on 12-14 April and will cover the topics described below. To help partners develop competencies, Oracle University and Oracle Alliances & Channel have designed this boot camp, condensing the content and thereby reducing costs. Price per participant (3 days): 1,250 EUR + VAT. Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users, wherever they work in the enterprise. The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content. Target Audience: The Oracle ECM Implementation Boot Camp is designed for architects, technical consultants, team/project leaders and functional consultants of our system integrator partners who want to ramp up on ECM technology. Contents: The ECM Implementation Boot Camp is a three-day hands-on workshop, designed for Oracle Partners who are new to ECM, and will provide implementation instruction on the ECM technology offered by Oracle. The boot camp will: • Provide hands-on experience in implementing Oracle's truly unified, open and standards-based ECM technology • Provide the strategic direction about Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development • Expose a broad set of Oracle's ECM technologies. Objectives: The Oracle ECM Implementation Boot Camp is primarily focused on Oracle's ECM offering to manage and maintain unstructured content and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM). Topics Covered • Introduction to Oracle UCM o UCM Overview o UCM Architecture Overview • Content Server and Document Management basics o Installation and Administration Skills § User and Security Admin § Configuration (metadata, DCLs, profiles, rules, etc.) § Workflow Admin § System Properties and Component Manager § Managing Subscriptions o Contributing Content § Browser form § WebDAV folder § Desktop Integration o Searching • Web Content Management o Site Studio • Universal Records Management • Information Rights Management (IRM) • Image & Process Management (IPM) • Oracle Document Capture • Oracle eMail Archive Service. Labs • Content Server Installation • Use and Administration of Content Server • Introduction to Site Studio • Use and Administration of Records Manager Demo: The R&D Group and the New Patent Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management Case Study Use Case 1: Enable City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and eventually public Web site. Use Case 2: Help Acme & Co achieve its goal of becoming "paperless" by managing all of the company's business content in a central, Web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.
Agenda: Day 1 - ECM Overview & Content Server: ECM Overview; ECM Architecture and Installation; UCM and Digital Asset Management DEMO; Lab 1 - Content Server Installation; Lab 2 - Use and Administration of Content Server. Day 2 - Web Content Management: Lab 2 - Use and Administration of Content Server (continued); Introduction to Web Content Management; Lab 3 - Site Studio. Day 3 - URM/IRM/IPM: Introduction to Universal Records Management; Lab 4 - URM; Introduction to Information Rights Management; Information Rights Management DEMO; Introduction to Image and Process Management; Image and Process Management Demo; Oracle Document Capture; Oracle eMail Archive. Material needed for the boot camp: this boot camp requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements: Hardware - RAM: 2 GB RAM minimum (1 GB RAM is not enough); HDD: 15 GB free HDD space. Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test that is mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the boot camp. For more information and registration, contact: Mónica Pires, 21 423 51 44. Schedule and location: 9:30-12:30 and 14:00-17:00 (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that the serialization is handled. These are probably not super common but they may throw you for a loop. Here's what I found. Simple Parameters from Xml or JSON Content Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:public string ReturnAlbumInfo(Album album) { return album.AlbumName + " (" + album.YearReleased.ToString() + ")"; } However, if you have methods that accept simple parameter types like strings, dates, number etc., those methods don't receive their parameters from XML or JSON body by default and you may end up with failures. Take the following two very simple methods:public string ReturnString(string message) { return message; } public HttpResponseMessage ReturnDateTime(DateTime time) { return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time); } The first one accepts a string and if called with a JSON string from the client like this:var client = new HttpClient(); var result = client.PostAsJsonAsync<string>(http://rasxps/AspNetWebApi/albums/rpc/ReturnString, "Hello World").Result; which results in a trace like this: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1Content-Type: application/json; charset=utf-8Host: rasxpsContent-Length: 13Expect: 100-continueConnection: Keep-Alive "Hello World" produces… wait for it: null. Sending a date in the same fashion:var client = new HttpClient(); var result = client.PostAsJsonAsync<DateTime>(http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime, new DateTime(2012, 1, 1)).Result; results in this trace: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1Content-Type: application/json; charset=utf-8Host: rasxpsContent-Length: 30Expect: 100-continueConnection: Keep-Alive "\/Date(1325412000000-1000)\/" (yes still the ugly MS AJAX date, yuk! This will supposedly change by RTM with Json.net used for client serialization) produces an error response: The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter. Basically any simple parameters are not parsed properly resulting in null being sent to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work using an explicit [FromBody] attribute:public string ReturnString([FromBody] string message) andpublic HttpResponseMessage ReturnDateTime([FromBody] DateTime time) which explicitly instructs Web API to read the value from the body. UrlEncoded Form Variable Parsing Another similar issue I ran into is with POST Form Variable binding. Web API can retrieve parameters from the QueryString and Route Values but it doesn't explicitly map parameters from POST values either. 
Taking our same ReturnString function from earlier and posting a message POST variable like this: var formVars = new Dictionary<string,string>(); formVars.Add("message", "Some Value"); var content = new FormUrlEncodedContent(formVars); var client = new HttpClient(); var result = client.PostAsync("http://rasxps/AspNetWebApi/albums/rpc/ReturnString", content).Result; which produces this trace: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1 Content-Type: application/x-www-form-urlencoded Host: rasxps Content-Length: 18 Expect: 100-continue message=Some+Value When calling ReturnString: public string ReturnString(string message) { return message; } unfortunately it does not map the message value to the message parameter. This sort of mapping is unfortunately not available in Web API. Web API does support binding to form variables, but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example, you can use the following code to pick up POST variable data: public string ReturnMessageModel(MessageModel model) { return model.Message; } public class MessageModel { public string Message { get; set; } } Note that the model is bound and the message form variable is mapped to the Message property, as other variables would be mapped to properties if there were more. This works, but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values for that matter) from Web API's Request object as far as I can discern. Well, only if you consider this easy: public string ReturnString() { var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result; return formData.Get("message"); } Oddly, FormDataCollection does not allow indexers to work, so you have to use the .Get() method, which is rather odd. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data: public string ReturnString() { return HttpContext.Current.Request.Form["message"]; } which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data - given that it's meant to serve as a generic REST/HTTP API that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API
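If you need this in more than one action, the FormDataCollection workaround above is easy to wrap in a tiny extension method. This is just a convenience sketch built from the article's own code, not part of the Web API framework itself.

    using System.Net.Http;
    using System.Net.Http.Formatting;

    public static class RequestFormExtensions
    {
        // Reads a single urlencoded POST value, e.g. Request.Form("message").
        public static string Form(this HttpRequestMessage request, string key)
        {
            var formData = request.Content.ReadAsAsync<FormDataCollection>().Result;
            return formData == null ? null : formData.Get(key);
        }
    }

    // usage inside an ApiController action:
    // public string ReturnString() { return Request.Form("message"); }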

    Read the article

  • Integrating Code Metrics in TFS 2010 Build

    - by Jakob Ehn
    The build process template and custom activity described in this post are available here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/CodeMetricsSample.zip

Running code metrics has been available since VS 2008, but only from inside the IDE. Yesterday Microsoft finally released the Visual Studio Code Metrics Power Tool 10.0, a command line tool that lets you run code metrics on your applications. This means that it is now possible to perform code metrics analysis on the build server as part of your nightly/QA builds (for example). In this post I will show how you can run the metrics command line tool, together with a custom activity that reads the output, appends the results to the build log, and fails the build if the metric values exceed certain (configurable) threshold values.

The code metrics tool analyzes all the methods in the assemblies, measuring cyclomatic complexity, class coupling, depth of inheritance and lines of code. It then calculates a Maintainability Index from these values, a measure of how maintainable each method is, between 0 (worst) and 100 (best). For information on how this value is calculated, see http://blogs.msdn.com/b/codeanalysis/archive/2007/11/20/maintainability-index-range-and-meaning.aspx. After this it aggregates the information and presents it at the class, namespace and module level as well.

Running Metrics.exe in a build definition

Running the actual tool is easy: just use an InvokeProcess activity last in the Compile the Project sequence, reference the metrics.exe file and pass the correct arguments, and you will end up with a result XML file in the drop directory. Here is how it is done in the attached build process template: In the above sequence I first assign the path to the code metrics result file ([BinariesDirectory]\result.xml) to a variable called MetricsResultFile, which is then sent to the InvokeProcess activity in the Arguments property. Here are the arguments for the InvokeProcess activity: Note that we tell metrics.exe to analyze all assemblies located in the Binaries folder. You might want to do some more intelligent filtering here; you probably don't want to analyze all 3rd party assemblies, for example. Note also the path to metrics.exe; this is the default location when you install the Code Metrics power tool, and you must of course install the power tool on all build servers. Using the standard output logging (in the Handle Standard Output/Handle Error Output sections), we get the following output when running the build:

Integrating Code Metrics into the build

Having the results available next to the build result is nice, but we want to have the results integrated in the build result itself, and also to affect the outcome of the build. The point of having QA builds that measure, for example, code metrics is to make it very clear how the code being built measures up to the standards of the project/company. Just having an XML file available in the drop location will not cause the developers to improve their code, but a (partially) failing build will! To do this, we need to write a custom activity that parses the metrics result file, logs it to the build log and fails the build if the metric values are below/above some predefined threshold values. The custom activity performs the following steps:

1. Parses the XML. I'm using Linq 2 XSD for this; since the XML schema for the result file is available, it is very easy to generate code that lets you query the structure using standard Linq operators.
2. Runs through the metric result hierarchy and logs the metrics for each level, and also verifies the maintainability index and the cyclomatic complexity against the threshold values. The threshold values are defined in the build process template and are sent in as arguments to the custom activity.

3. If the threshold values are exceeded, the activity either fails or partially fails the current build.

For more information about the structure of the code metrics result file, read Cameron Skinner's post about it. It is very simple and easy to understand. I won't go through the code of the custom activity here, since there is nothing special about it and it is available for download so you can look at it and play with it yourself. The threshold values for Maintainability Index and Cyclomatic Complexity are defined in the build process template, and can be modified per build definition: I have taken the default values for these settings from my colleague Terje Sandström's post on Code Metrics - suggestions for appropriate limits. You'll notice that this is quite an improvement compared to using code metrics inside the IDE, where the Red/Yellow/Green limits are fixed (and the default values are somewhat strange; see Terje's post for a discussion on this).

This is the first version of the code metrics integration with TFS 2010 Build. I will probably enhance the functionality and the logging (the "tree view" structure in the log becomes quite hard to read) soon. I will also consider adding it to the Community TFS Build Extensions site when it becomes a bit more mature. Another obvious improvement is to extend the data warehouse of TFS and push the metric results back to the warehouse, making them visible in the reports.
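As a rough illustration of the parse-and-verify step described above (not the actual custom activity, which uses Linq 2 XSD generated types), a minimal LINQ to XML sketch could look like the following. The element and attribute names ("Metric", "Name", "Value") and the metric names are assumptions based on the power tool's result file; adjust them to the real schema if they differ:

    using System;
    using System.Xml.Linq;

    class MetricsThresholdCheck
    {
        // Threshold values; in the real activity these arrive as workflow arguments.
        const int MinMaintainabilityIndex = 40;
        const int MaxCyclomaticComplexity = 25;

        static void Main(string[] args)
        {
            // args[0]: path to the result file produced by metrics.exe,
            // e.g. [BinariesDirectory]\result.xml in the build process template.
            var doc = XDocument.Load(args[0]);
            bool failed = false;

            // Assumed shape: <Metric Name="MaintainabilityIndex" Value="87" /> elements
            // nested under the module/namespace/type/member nodes.
            foreach (var metric in doc.Descendants("Metric"))
            {
                var name = (string)metric.Attribute("Name");
                int value;
                if (!int.TryParse((string)metric.Attribute("Value"), out value))
                    continue; // skip metrics that are not plain integers

                if (name == "MaintainabilityIndex" && value < MinMaintainabilityIndex)
                {
                    Console.WriteLine("Maintainability index {0} is below the threshold {1}",
                                      value, MinMaintainabilityIndex);
                    failed = true;
                }
                else if (name == "CyclomaticComplexity" && value > MaxCyclomaticComplexity)
                {
                    Console.WriteLine("Cyclomatic complexity {0} exceeds the threshold {1}",
                                      value, MaxCyclomaticComplexity);
                    failed = true;
                }
            }

            // The real activity would fail or partially fail the build here instead.
            Environment.Exit(failed ? 1 : 0);
        }
    }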

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
    [Notes from Pinal]: SSIS is a very well explored subject; however, there are so many interesting elements that whenever we read about it, we learn something new. One such concept is the Parent-Child ETL architecture relationship in SSIS. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation on how to understand SSIS Parameters in Parent-Child ETL Architectures.

In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern. Simply put, this is a pattern in which packages are executed by other packages. An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages. For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic.

When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package. In older versions of SSIS, this process was possible but not necessarily simple. When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages. Package configurations, while effective, were not the easiest tool to work with. Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose.

In the example I will use for this demonstration, I'll create two packages: one intended for use as a child package, and the other configured to execute said child package. In the parent package I'm going to build a for each loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop. The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package.

Configuring the Child and Parent Packages

When you create a new package, you'll see the Parameters tab at the package level. Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters. Note that I've set the name, data type, and default value for each of these. Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution. In this example, I have one parameter that is required, and the other is not.

Let's shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package. Using the execute package task, you can easily map variable values in the parent package to parameters in the child package. The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package. Note that there is no value mapped to the child package parameter named pOutputFolder.
Since this parameter has the Required property set to False, we don't have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child package.

The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it. I'm using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this in more depth here). For each iteration of the loop, a different client ID value will be passed into the child package parameter.

The final step is to configure the child package to actually do something meaningful with the parameter values passed into it. In this case, I've modified the OLE DB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client's data. Additionally, I'll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, the pClientID is used in the WHERE clause, so we only get the current client's invoices for each iteration of the loop. For the flat file connection, I'm setting the Connection String property using an expression that engages both of the parameters for this package, as shown above.

Parting Thoughts

There are many uses for package parameters beyond a simple parent-child design pattern. For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters. Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL.

You can also have project parameters as well as package parameters. Project parameters work in much the same way as package parameters, but they apply to all packages in a project, not just a single package.

Conclusion

Of the numerous advantages of using the catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list. Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
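To make the "through T-SQL" option mentioned above concrete, here is a rough sketch only (the folder, project, package, and parameter names are placeholders, not the ones from the demo) of starting a catalog execution and overriding a package parameter using the SSISDB stored procedures available in SSIS 2012 and later:

    -- Create an execution for the child package in the SSIS catalog.
    DECLARE @execution_id BIGINT;

    EXEC SSISDB.catalog.create_execution
        @folder_name  = N'ETL',              -- placeholder folder name
        @project_name = N'ParentChildDemo',  -- placeholder project name
        @package_name = N'ChildPackage.dtsx',
        @execution_id = @execution_id OUTPUT;

    -- Override the pClientID package parameter for this execution.
    -- @object_type = 30 targets a package parameter (20 would target a project parameter).
    EXEC SSISDB.catalog.set_execution_parameter_value
        @execution_id,
        @object_type     = 30,
        @parameter_name  = N'pClientID',
        @parameter_value = 42;

    -- Start the execution.
    EXEC SSISDB.catalog.start_execution @execution_id;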

    Read the article

< Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >