Search Results

Search found 21327 results on 854 pages for 'display resolution'.


  • MEF CompositionInitializer for WPF

    - by Reed
    The Managed Extensibility Framework is an amazingly useful addition to the .NET Framework. I was very excited to see System.ComponentModel.Composition added to the core framework. Personally, I feel that MEF is one tool I've always been missing in my .NET development. Unfortunately, one scenario where MEF tends to fall short of its full potential is Windows Presentation Foundation development. In particular, there are many times when the XAML parser constructs objects in WPF development, which makes composition of those parts difficult. The current release of MEF (Preview Release 9) addresses this for Silverlight developers via System.ComponentModel.Composition.CompositionInitializer. However, there is no equivalent class for WPF developers. The CompositionInitializer class provides the means for an object to compose itself. This is very useful in WPF and Silverlight development, since it allows a View, such as a UserControl, to be generated via the standard XAML parser, and still automatically pull in the appropriate ViewModel in an extensible manner. Glenn Block has demonstrated the usage for Silverlight in detail, but the same issues apply in WPF. As an example, let's take a look at a very simple case. Take the following XAML for a Window: <Window x:Class="WpfApplication1.MainView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="220" Width="300"> <Grid> <TextBlock Text="{Binding TheText}" /> </Grid> </Window> This does nothing but create a Window, add a simple TextBlock control, and use it to display the value of our "TheText" property in our DataContext class. Since this is our main window, WPF will automatically construct and display this Window, so we need to handle constructing the DataContext and setting it ourselves. We could do this in code or in XAML, but to do it directly we would either need to hard-code the ViewModel type into our XAML, or construct the ViewModel class and set it in the code behind. Both have disadvantages, and the disadvantages grow if we're using MEF to compose our ViewModel. Ideally, we'd like to be able to have MEF construct our ViewModel for us. This way, it can provide any construction requirements for our ViewModel via [ImportingConstructor], and it can handle fully composing the imported properties on our ViewModel. CompositionInitializer allows this to occur. We use CompositionInitializer within our View's constructor, using it for self-composition of our View. Using CompositionInitializer, we can modify our code behind to: public partial class MainView : Window { public MainView() { InitializeComponent(); CompositionInitializer.SatisfyImports(this); } [Import("MainViewModel")] public object ViewModel { get { return this.DataContext; } set { this.DataContext = value; } } } We then can add an Export to our ViewModel class like so: [Export("MainViewModel")] public class MainViewModel { public string TheText { get { return "Hello World!"; } } } MEF will automatically compose our application, deferring the injection of our ViewModel into the View's DataContext until runtime. When we run this, the window displays "Hello World!". There are many other approaches for using MEF to wire up the extensible parts within your application, of course.
However, any time an object is going to be constructed by code outside of your control, CompositionInitializer allows us to continue to use MEF to satisfy the import requirements of that object. In order to use this from WPF, I’ve ported the code from MEF Preview 9 and Glenn Block’s (now obsolete) PartInitializer port to Windows Presentation Foundation.  There are some subtle changes from the Silverlight port, mainly to handle running in a desktop application context.  The default behavior of my port is to construct an AggregateCatalog containing a DirectoryCatalog set to the location of the entry assembly of the application.  In addition, if an “Extensions” folder exists under the entry assembly’s directory, a second DirectoryCatalog for that folder will be included.  This behavior can be overridden by specifying a CompositionContainer or one or more ComposablePartCatalogs to the System.ComponentModel.Composition.Hosting.CompositionHost static class prior to the first use of CompositionInitializer. Please download CompositionInitializer and CompositionHost for VS 2010 RC, and contact me with any feedback. Composition.Initialization.Desktop.zip Edit on 3/29: Glenn Block has since updated his version of CompositionInitializer (and ExportFactory<T>!), and made it available here: http://cid-f8b2fd72406fb218.skydrive.live.com/self.aspx/blog/Composition.Initialization.Desktop.zip This is a .NET 3.5 solution, and should soon be pushed to CodePlex, and made available on the main MEF site.
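    For completeness, here is a minimal sketch of how the default catalog behavior described above could be overridden at application startup, before any view calls CompositionInitializer.SatisfyImports. The catalog types are the standard ones from System.ComponentModel.Composition.Hosting; the Initialize method name and the "Extensions" folder handling are assumptions based on the description above and on the Silverlight CompositionHost API, so check the downloaded source for the exact signature.

        // Sketch only: register a custom container with CompositionHost before the first
        // CompositionInitializer.SatisfyImports call. The Initialize method name is assumed
        // from the Silverlight API; verify it against the downloaded WPF port.
        using System;
        using System.ComponentModel.Composition.Hosting;
        using System.IO;
        using System.Windows;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                string baseDir = AppDomain.CurrentDomain.BaseDirectory;
                var catalog = new AggregateCatalog(new DirectoryCatalog(baseDir));

                // Hypothetical extensions folder; only added if it exists.
                string extensions = Path.Combine(baseDir, "Extensions");
                if (Directory.Exists(extensions))
                {
                    catalog.Catalogs.Add(new DirectoryCatalog(extensions));
                }

                CompositionHost.Initialize(new CompositionContainer(catalog));
            }
        }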


  • Download the ‘Getting Started with Ubuntu 12.10' Manual for Free

    - by Asian Angel
    Today is the official release date for Ubuntu's latest version, so why not download the manual to go with it? This free manual is available to view online or to download as a 145-page PDF file, whichever best suits your needs. The home page for the manual will display a large Download button, but the best option is to click on the Alternative Download Options link, which lets you select the language version you want, choose a system version, and either download the manual directly or view it online.


  • Windows 7 intermittently drops wired Internet/LAN connection

    - by CraigTP
    In a nutshell, my Windows 7 Ultimate PC intermittently drops it's internet connection. Why? Background: My PC is wired to my ADSL modem/router which is directly connected to the phone line. I also have wireless connectivity turned on within the router for a laptop to connect wirelessly. Every few hours or so, when using my PC, I find I cannot access the internet and pages will not load. Eventually, Windows7 will update the network icon in the task-tray to show the exclamation mark symbol on the network icon. Opening up the Network And Sharing Centre will show the red cross between the "Multiple Networks" and "The Internet". Here's a picture of the "Network And Sharing Centre" (grabbed when everything was working!) As you can see, I'm running Sun's VirtualBox on this machine and that creates a Network connection for itself. This doesn't seem to affect the intermittent dropping (i.e. the intermittent drops occur whether the VirtualBox connection is in use or not). When the connection does drop, I cannot access any internet pages, nor can I access the router's web admin page at http://192.168.1.1/, so I'm assuming I've lost all local LAN access too. It's definitely not the router (or the internet connection itself) as my laptop, using the wireless connection (and running Vista Home Premium) continues to be able to access the internet (and the router's web admin pages) just fine. Every time this happens, I can immediately restore all internet and LAN access by opening Network Adapter page, disabling the "Local Area Connection" and then re-enabling it. Give it a few seconds and everything is fine again. I assume this is because, beneath the GUI, it's effectively doing an "ipconfig /release" then "ipconfig /renew". Why does this happen in the first place, though? I've googled for this and seen quite a few other people (even on MSDN/Technet forums) experiencing the same or almost the same problem, but with no clear resolution. Suggestions of turning off IPv6 on the LAN adapter, and ensuring there's no power management "sleeping" the network adapter have been tried but do not cure the problem. There does not seem to be any particular sequence of events that cause it to happen either. I've had it go twice in 20 minutes when just randomly browsing the web with no other traffic, and I've also had it go once then not go again for 2-3 hours with the same sort of usage. Can anyone tell me why this is happening and how to make it stop? EDIT: Additional information based upon the answer provided so far: Firstly, I forgot the mention that this is Windows 7 64 bit if that makes any difference at all. I mentioned that I don't think the VirtualBox network adpater is causing this problem in any way, and I also have VirtualBox installed on two other machines, one running Vista Home Premium and the other running XP. Neither of these machine experience the same network connectivity issues as the Windows 7 machine. The IP assignment for the Windows 7 machine is the same both before and after the "drop". I have a DHCP server on the router issuing IP Addresses, however my Windows 7 machine uses a static address. Here's the output from "ipconfig": Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 192.168.1.2(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . 
: 192.168.1.1 DNS Servers . . . . . . . . . . . : 192.168.1.1 NetBIOS over Tcpip. . . . . . . . : Enabled Within the system's event logs, the only event that relates to the connection dropping is a "DNS Client Event" and this is generated after the connection has dropped and is an event detailing that DNS information can't be found for whatever website I may be trying to access, just as the connection drops: Log Name: System Source: Microsoft-Windows-DNS-Client Event ID: 1014 Task Category: None Level: Warning Keywords: User: NETWORK SERVICE Description: Name resolution for the name weather.service.msn.com timed out after none of the configured DNS servers responded. The network adapter chipset is Realtek PCIe GBE Family Controller and I have confirmed that this is the correct chipset for the motherboard (Asus M4A77TD PRO), and in fact, Windows Update installed an updated driver for this on 12/Jan/2009. The details of the update say that it's a Realtek software update from December 2009. Incidentally, I was still having the same intermittent problems prior to this update. It seems to have made no difference at all. EDIT 2 (1 Feb 2010): In my quest to solve this problem, I have discovered some more interesting information. On another forum, someone suggested that I should try running Windows in "Safe Mode With Networking" and see if the problem continues to occur. This was a fantastic suggestion and I don't know why I didn't think of it sooner myself. So, I proceeded to run in Safe Mode with Networking for a number of hours, and amazingly, the "drops" didn't occur once. It was a positive discovery, however, due to the intermittent nature of the original problem, I wasn't completely convinced that the problem was cured. One thing I did note is that the fan on my GFX card was running alot louder than normal. This is due to the fact that I have an ASUS ENGTS250 graphics card (http://www.asus.com/product.aspx?P_ID=B6imcoax3MRY42f3) which had a known problem with a noisy fan until a BIOS update fixed the issue. (See the "Manufacturer Response" here: http://www.newegg.com/Product/Product.aspx?Item=N82E16814121334 for details). Well, running in safe mode had the fan running (incorrectly) at full speed (as it did before the BIOS update), but with an (apparently) stable network connection. Obviously some driver was not loaded for the GFX card when in Safe Mode so this got me thinking about the GFX card (since the very noisy fan was quite obvious when running in Safe Mode). I rebooted into normal mode, and found that Nvidia had a very up-to-date new driver for my GFX card (only about 1 week old), so I downloaded the appropriate driver and installed it. After installation and a reboot, I was able to use my PC for an entire day with NO NETWORK DROPS!!! This was on Saturday. However, on the Sunday, I also had my PC for pretty much the entire day and experienced 2 network drops. No other changes have been made to my PC in this time. So, the story seems to be that updating my graphics card drivers seems to have improved (if not completely fixed) the issue, however, I'm still searching for a proper fix for this problem. Hopefully, this information may help anyone who may have additional ideas as to why this problem is occuring in the first place. (And why does new GFX card drivers have anything to do with the network?) I appreciate everyone's feedback so far. However, I'll have to ask once more if anyone has any further ideas of how to fix this particular problem? Thanks in advance.
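    As an aside, a small diagnostic sketch (not part of the original question) that could help here: log network availability and address changes with timestamps using the standard System.Net.NetworkInformation events, so each drop can be matched against the DNS Client entries in the System event log.

        // Diagnostic sketch: timestamp LAN availability/address changes so they can be
        // compared against the Event Viewer entries described above.
        using System;
        using System.Net.NetworkInformation;

        class ConnectivityLogger
        {
            static void Main()
            {
                NetworkChange.NetworkAvailabilityChanged += (sender, e) =>
                    Console.WriteLine("{0:HH:mm:ss}  Network available: {1}", DateTime.Now, e.IsAvailable);

                NetworkChange.NetworkAddressChanged += (sender, e) =>
                    Console.WriteLine("{0:HH:mm:ss}  An interface address changed", DateTime.Now);

                Console.WriteLine("Logging connectivity changes. Press Enter to quit.");
                Console.ReadLine();
            }
        }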


  • Microsoft Visual Studio Release History/Timelines/Milestones

    1975 – Bill Gates and Paul Allen write a version of BASIC for the Altair 8080
    1982 – IBM releases BASCOM 1.0 (developed by Microsoft)
    1983 – Microsoft Basic Compiler System v5.35 for MS-DOS released
    1984 – Microsoft Basic Compiler System v5.36 released
    1985 – Microsoft QuickBASIC 1.0
    1986 – Microsoft QuickBASIC 1.01, 1.02, 2.00
    1987 – Microsoft QuickBASIC 2.01, 3.00, 4.00
    1987 – Microsoft BASIC 6.0
    1988 – Microsoft QuickBASIC 4.00, 4.00b, 4.50
    1989 – Microsoft BASIC Professional Development System 7.0
    1990 – Microsoft BASIC Professional Development System 7.1
    1991 – Microsoft Visual Basic released May 20 at the Windows World convention in Atlanta
    1992 – Microsoft Visual Basic 2.0
    1993 – Microsoft Visual Basic 3.0 in Standard and Professional versions
    1995 – Microsoft Visual Basic 4.0 released, supporting the new Windows 95
    1997 – Microsoft Visual Basic 5.0 – introduction of IntelliSense
    1998 – Microsoft Visual Studio 6.0, which included Visual Basic 6.0, released (the first Visual Studio)
    2002 – Microsoft Visual Basic .NET 7.0
    2002 – Visual Studio .NET
    2003 – Microsoft Visual Basic .NET 7.1
    2003 – Microsoft Visual Studio with IntelliSense
    2003 – Visual Studio .NET 2003
    2004 – Visual Studio 2005 announced – code name Whidbey
    2005 – Visual Studio 2005 released with extensibility
    2005 – Visual Studio Express released
    2006 – Expression tool set released – developers and designers work together
    2006 – Visual Studio Team release – November 30th
    2007 – Visual Studio 2008 (code name Orcas) ships in November, with the Visual Studio Shell
    2010 – Visual Studio (code name Rosario)


  • yum update works but yum --security update fails to work in Fedora 12

    - by bobo
    I had already installed the yum-security before. And I was going to do an update by entering the following command: [root@localhost /]# yum update Loaded plugins: presto, priorities, refresh-packagekit, security Skipping security plugin, no data Setting up Update Process Resolving Dependencies Skipping security plugin, no data --> Running transaction check ---> Package eject.i686 0:2.1.5-17.fc12 set to be updated ---> Package glibc.i686 0:2.11.1-4 set to be updated ---> Package glibc-common.i686 0:2.11.1-4 set to be updated ---> Package glibc-devel.i686 0:2.11.1-4 set to be updated ---> Package glibc-headers.i686 0:2.11.1-4 set to be updated ---> Package gnome-themes.noarch 0:2.28.1-3.fc12 set to be updated ---> Package gtk2.i686 0:2.18.9-3.fc12 set to be updated ---> Package gtk2-immodule-xim.i686 0:2.18.9-3.fc12 set to be updated ---> Package kernel-PAE.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-PAE-devel.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-PAEdebug-devel.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-debug-devel.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-devel.i686 0:2.6.32.11-99.fc12 set to be installed ---> Package kernel-firmware.noarch 0:2.6.32.11-99.fc12 set to be updated ---> Package kernel-headers.i686 0:2.6.32.11-99.fc12 set to be updated ---> Package libnetfilter_conntrack.i686 0:0.0.101-1.fc12 set to be updated ---> Package media-player-info.noarch 0:5-1.fc12 set to be updated ---> Package nscd.i686 0:2.11.1-4 set to be updated ---> Package perf.noarch 0:2.6.32.11-99.fc12 set to be updated ---> Package rhythmbox.i686 0:0.12.6-5.fc12 set to be updated ---> Package sysvinit-tools.i686 0:2.87-3.dsf.fc12 set to be updated --> Finished Dependency Resolution --> Running transaction check ---> Package kernel-PAE.i686 0:2.6.31.12-174.2.3.fc12 set to be erased --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: kernel-PAE i686 2.6.32.11-99.fc12 updates 20 M kernel-PAE-devel i686 2.6.32.11-99.fc12 updates 6.2 M kernel-PAEdebug-devel i686 2.6.32.11-99.fc12 updates 6.2 M kernel-debug-devel i686 2.6.32.11-99.fc12 updates 6.2 M kernel-devel i686 2.6.32.11-99.fc12 updates 6.1 M Updating: eject i686 2.1.5-17.fc12 updates 49 k glibc i686 2.11.1-4 updates 4.2 M glibc-common i686 2.11.1-4 updates 14 M glibc-devel i686 2.11.1-4 updates 953 k glibc-headers i686 2.11.1-4 updates 590 k gnome-themes noarch 2.28.1-3.fc12 updates 1.5 M gtk2 i686 2.18.9-3.fc12 updates 3.2 M gtk2-immodule-xim i686 2.18.9-3.fc12 updates 60 k kernel-firmware noarch 2.6.32.11-99.fc12 updates 968 k kernel-headers i686 2.6.32.11-99.fc12 updates 749 k libnetfilter_conntrack i686 0.0.101-1.fc12 updates 37 k media-player-info noarch 5-1.fc12 updates 32 k nscd i686 2.11.1-4 updates 189 k perf noarch 2.6.32.11-99.fc12 updates 79 k rhythmbox i686 0.12.6-5.fc12 updates 4.0 M sysvinit-tools i686 2.87-3.dsf.fc12 updates 58 k Removing: kernel-PAE i686 2.6.31.12-174.2.3.fc12 @updates 72 M Transaction Summary ================================================================================ Install 5 Package(s) Upgrade 16 Package(s) Remove 1 Package(s) Reinstall 0 Package(s) Downgrade 0 Package(s) Total download size: 75 M Is this ok [y/N]: But then I changed my mind, I decided to do a security-only update instead of a full update, so 
I entered the following command: [root@localhost /]# yum --security update Loaded plugins: presto, priorities, refresh-packagekit, security Setting up Update Process Resolving Dependencies Limiting packages to security relevant ones http://download.fedoraproject.org/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. ftp://ftp.chu.edu.tw/linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno -1] Metadata file does not match checksum Trying other mirror. http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.kddilabs.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://www.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. 
http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.twaren.net/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://c147.twaren.net/pub/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz Trying other mirror. Error: failure: repodata/updateinfo.xml.gz from updates: [Errno 256] No more mirrors to try. You could try using --skip-broken to work around the problem ^C[root@localhost /]# As it can be seen in the output, when I run the yum --security update command, it did show the Limiting packages to security relevant ones message so it's aware of the option. But I don't know why it keeps reporting the http error 416. I searched in google and found the following description of the error but it doesn't seem to help much. HTTP ERROR 416 - Requested Range Not Satisfiable A 416 status code indicates that the server was unable to fulfill the request. This may be, for example, because the client asked for the 800th-900th bytes of a document, but the document was only 200 bytes long. It suggests me to use the --skip-broken option, I tried and the output is the same. I already tested many times, it just doesn't work when the --security option is used. What could be the possible cause for this problem?


  • SQL Server 2008 Restore from Backup fails with error 3241 'cannot process this media family'

    - by pearcewg
    I am attempting to back up a database from a SQL Server instance on one machine and restore it to another, and I am encountering the frequently encountered 'SQL Server cannot process this media family' error. Both of my instances are SQL Server 2008, but with different patch levels: Restore: 10.0.2531.0, Backup: 10.0.1600.22 ((SQL_PreRelease).080709-1414). The restore instance is Express; I'm not sure about the edition of the backup instance. The backup instance is on a virtual private server and the restore instance is on my development box. When I restore to a different database on the source (backup) server, it restores fine. There is lots of material on Google, and some on Stack Overflow, about this issue, but nothing which matches this exact situation. Any thoughts? It should be straightforward to do a backup and restore from one machine to another (having done this thousands of times with SQL Server 6.5, 7, 2000 and 2005). Any ideas how to restore a database in this situation, which gives this error when attempting the restore? PARTIAL RESOLUTION: When I restored to a different box, running SQL 2008 Express on Windows Server 2003, all worked well. It just wouldn't work on the Windows 7 box. Not sure why. If anyone else has had a similar experience, please let me know (there are many similar issues in different forums out there).
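    A common first check for error 3241 (not something the poster mentions trying) is to read the backup header on the destination server: if RESTORE HEADERONLY fails there too, the file itself is unreadable on that machine (often a corrupted copy), and if it succeeds it reports which engine version wrote the backup. A rough C# sketch, with the connection string and file path as placeholders:

        // Sketch: run RESTORE HEADERONLY against the destination instance to inspect the
        // backup file before attempting a full restore. Connection string and path are
        // placeholders for this example.
        using System;
        using System.Data.SqlClient;

        class BackupHeaderCheck
        {
            static void Main()
            {
                const string connectionString = @"Server=.\SQLEXPRESS;Integrated Security=true";
                const string backupFile = @"C:\Temp\MyDatabase.bak"; // hypothetical path

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("RESTORE HEADERONLY FROM DISK = @file", conn))
                {
                    cmd.Parameters.AddWithValue("@file", backupFile);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // SoftwareVersionMajor/Minor/Build identify the engine that wrote the backup set.
                            Console.WriteLine("Backup set {0}: written by version {1}.{2}.{3}",
                                reader["Position"],
                                reader["SoftwareVersionMajor"],
                                reader["SoftwareVersionMinor"],
                                reader["SoftwareVersionBuild"]);
                        }
                    }
                }
            }
        }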


  • Set the Minimum and Maximum Tab Widths in Firefox without an Add-on

    - by Lori Kaufman
    If you tend to have a lot of tabs open in Firefox, there may be times when you can't see all the tabs you have open, and you need to navigate among your tabs using the tab scrolling arrows. There are add-ons available for Firefox that will display tabs in multiple rows, such as Tab Utilities. However, this still may not be ideal, as it takes up a lot of screen real estate when you have a lot of tabs open. There's an easy way to set the width of the tabs so they still display text or website icons and, at the same time, allow more tabs to be visible. To change the width of the tabs, enter "about:config" in the address bar in Firefox and press Enter.


  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal. Installing Durandal First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki — located at the GitHub project — contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console: Install-Package Durandal.StarterKit As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies including: · jQuery · Knockout · Sammy · Twitter Bootstrap The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this: bundles.Add( new ScriptBundle("~/scripts/vendor") .Include("~/Scripts/jquery-{version}.js") .Include("~/Scripts/knockout-{version}.js") .Include("~/Scripts/sammy-{version}.js") // .Include("~/Scripts/jquery-1.9.0.min.js") // .Include("~/Scripts/knockout-2.2.1.js") // .Include("~/Scripts/sammy-0.7.4.min.js") .Include("~/Scripts/bootstrap.min.js") ); The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files: · durandal – This folder contains the actual durandal JavaScript library. · viewmodels – This folder contains all of your application’s view models. · views – This folder contains all of your application’s views. · main.js — This file contains all of the JavaScript startup code for your app including the client-side routing configuration. · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS). 
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library. Creating the ASP.NET MVC Controller and View A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller: public class HomeController : Controller { public ActionResult Index() { return View(); } } There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”: Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery and throw a JavaScript exception and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view: <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick-off your Durandal app. Creating the Durandal Main.js File The Durandal Main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the Main.js file looks like in the case of the Movies app: require.config({ paths: { 'text': 'durandal/amd/text' } }); define(function (require) { var app = require('durandal/app'), viewLocator = require('durandal/viewLocator'), system = require('durandal/system'), router = require('durandal/plugins/router'); //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); app.start().then(function () { //Replace 'viewmodels' in the moduleId with 'views' to locate the view. //Look for partial views in a 'views' folder in the root. 
viewLocator.useConvention(); //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); app.adaptToDevice(); //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); }); }); There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging which looks like this: //>>excludeStart(“build”, true); system.debug(true); //>>excludeEnd(“build”); This code enables debugging for your Durandal app which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes: (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas). The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three page: the movies show, add, and details pages. //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id");   The route for movie details includes a route parameter named id. Later, we will use the id parameter to lookup and display the details for the right movie. Finally, the main.js file above contains the following line of code: //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I’ll discuss the shell in the next section. Creating the Durandal Shell You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed from two parts: a JavaScript file and an HTML file. Here’s what the HTML file looks like for the Movies app: <h1>Movies App</h1> <div class="container-fluid page-host"> <!--ko compose: { model: router.activeItem, //wiring the router afterCompose: router.afterCompose, //wiring the router transition:'entrance', //use the 'entrance' transition when switching views cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models) }--><!--/ko--> </div> And here is what the JavaScript file looks like: define(function (require) { var router = require('durandal/plugins/router'); return { router: router, activate: function () { return router.activate('movies/show'); } }; }); The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different age to the router.activate() method. Creating the Movies Show Page Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view. 
The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this: define(function (require) { var moviesRepository = require("repositories/moviesRepository"); return { movies: ko.observable(), activate: function() { this.movies(moviesRepository.listMovies()); } }; }); You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this: <table> <thead> <tr> <th>Title</th><th>Director</th> </tr> </thead> <tbody data-bind="foreach:movies"> <tr> <td data-bind="text:title"></td> <td data-bind="text:director"></td> <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td> </tr> </tbody> </table> <a href="#/movies/add">Add Movie</a> Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (To learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually. Creating the Movies Add Page The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page: define(function (require) { var app = require('durandal/app'); var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToAdd: { title: ko.observable(), director: ko.observable() }, activate: function () { this.movieToAdd.title(""); this.movieToAdd.director(""); this._movieAdded = false; }, canDeactivate: function () { if (this._movieAdded == false) { return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']); } else { return true; } }, addMovie: function () { // Add movie to db moviesRepository.addMovie(ko.toJS(this.movieToAdd)); // flag new movie this._movieAdded = true; // return to list of movies router.navigateTo("#/movies/show"); } }; }); The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods: 1. 
activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties. 2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled. 3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository. I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form: The view for the add movie page looks like this: <form data-bind="submit:addMovie"> <fieldset> <legend>Add Movie</legend> <div> <label> Title: <input data-bind="value:movieToAdd.title" required /> </label> </div> <div> <label> Director: <input data-bind="value:movieToAdd.director" required /> </label> </div> <div> <input type="submit" value="Add" /> <a href="#/movies/show">Cancel</a> </div> </fieldset> </form> I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted. Creating the Movies Details Page You navigate to the movies details Page by clicking the Details link which appears next to each movie in the movies show page: The Details links pass the movie ids to the details page: #/movies/details/0 #/movies/details/1 #/movies/details/2 Here’s what the view model for the movies details page looks like: define(function (require) { var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToShow: { title: ko.observable(), director: ko.observable() }, activate: function (context) { // Grab movie from repository var movie = moviesRepository.getMovie(context.id); // Add to view model this.movieToShow.title(movie.title); this.movieToShow.director(movie.director); } }; }); Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings: <div> <h2 data-bind="text:movieToShow.title"></h2> directed by <span data-bind="text:movieToShow.director"></span> </div> Summary The goal of this blog entry was to walkthrough building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done. 
    I also appreciate the fact that Durandal did not attempt to re-invent the wheel and instead leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These are powerful libraries, and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use them together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app's code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing them from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App
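    One note to connect this back to the opening recommendation about the ASP.NET Web API: the walkthrough keeps its movie data in a client-side moviesRepository module, but the same view models could call a Web API controller instead. The controller below is purely illustrative and not part of the downloadable sample; the Movie class, the route comments, and the data are made up for the sketch.

        // Hypothetical ASP.NET Web API controller (MVC 4 era) that could back the
        // client-side moviesRepository used in the walkthrough. Illustrative only.
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;

        public class Movie
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public string Director { get; set; }
        }

        public class MoviesController : ApiController
        {
            private static readonly List<Movie> Movies = new List<Movie>
            {
                new Movie { Id = 0, Title = "Star Wars", Director = "George Lucas" },
                new Movie { Id = 1, Title = "Jaws", Director = "Steven Spielberg" },
                new Movie { Id = 2, Title = "Memento", Director = "Christopher Nolan" }
            };

            // GET api/movies
            public IEnumerable<Movie> Get()
            {
                return Movies;
            }

            // GET api/movies/1
            public Movie Get(int id)
            {
                return Movies.FirstOrDefault(m => m.Id == id);
            }
        }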


  • Why does Mass Effect 1 run so slow on my machine if I have an XFX NVidia 9400GT video card? [closed]

    - by Papuccino1
    I'm so sick and tired of having my components pass the minimum requirements of a game and then getting 15 FPS with everything on low. Shouldn't PC developers say 'use at least this video card for a smooth 30 FPS'? Here are my specs:
    Windows 7
    2GB DDR2 RAM
    XFX Nvidia 9400GT
    Intel Pentium D Dual Core 2.8GHz
    I should be getting at LEAST 30 FPS with everything on low, right? Please tell me what I can do to make games run as they should, or is my video card not good enough for these games? Here are the recommended requirements from the official site:
    Recommended System Requirements for Mass Effect on the PC
    Operating System: Windows XP or Vista
    Processor: 2.6+ GHz Intel or 2.4+ GHz AMD
    Memory: 2 Gigabyte RAM
    Video Card: NVIDIA GeForce 7900 GTX or higher. ATI X1800 XL series or higher
    Hard Drive Space: 12 Gigabytes
    Sound Card: DirectX 9.0c compatible sound card and drivers – 5.1 sound card recommended
    My video card is a 9400GT, how is that worse than a 7900GTX? :S Edit 2: I should note that I get poor frames when running the game at absolute BOTTOM specs: lowest resolution, no particles, etc. Absolutely everything at zero and still getting poor framerates.


  • JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were: JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue In this example we will create a BPEL process which will write (enqueue) a message to a JMS queue using a JMS adapter. The JMS adapter will enqueue the full XML payload to the queue. This sample will use the following WebLogic Server objects.
The first two, the Connection Factory and JMS Queue, were created as part of the first blog post in this series, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g. If you haven't created those objects yet, please see that post for details on how to do so. The Connection Pool will be created as part of this example. Object Name Type JNDI Name TestConnectionFactory Connection Factory jms/TestConnectionFactory TestJMSQueue JMS Queue jms/TestJMSQueue eis/wls/TestQueue Connection Pool eis/wls/TestQueue 1. Verify Connection Factory and JMS Queue As mentioned above, this example uses a WLS Connection Factory called TestConnectionFactory and a JMS queue TestJMSQueue. As these are prerequisites for this example, let us verify they exist. Log in to the WebLogic Server Administration Console. Select Services > JMS Modules > TestJMSModule You should see the following objects: If not, or if the TestJMSModule is missing, please see the abovementioned article and create these objects before continuing. 2. Create a JMS Adapter Connection Pool in WebLogic Server The BPEL process we are about to create uses a JMS adapter to write to the JMS queue. The JMS adapter is deployed to the WebLogic server and needs to be configured to include a connection pool which references the connection factory associated with the JMS queue. In the WebLogic Server Console Go to Deployments > Next and select (click on) the JmsAdapter Select Configuration > Outbound Connection Pools and expand oracle.tip.adapter.jms.IJmsConnectionFactory. This will display the list of connections configured for this adapter. For example, eis/aqjms/Queue, eis/aqjms/Topic etc. These JNDI names are actually quite confusing. We are expecting to configure a connection pool here, but the names refer to queues and topics. One would expect these to be called *ConnectionPool or *_CF or similar, but to conform to this nomenclature, we will call our entry eis/wls/TestQueue . This JNDI name is also the name we will use later, when creating a BPEL process to access this JMS queue! Select New, check the oracle.tip.adapter.jms.IJmsConnectionFactory check box and Next. Enter JNDI Name: eis/wls/TestQueue for the connection instance, then press Finish. Expand oracle.tip.adapter.jms.IJmsConnectionFactory again and select (click on) eis/wls/TestQueue The ConnectionFactoryLocation must point to the JNDI name of the connection factory associated with the JMS queue you will be writing to. In our example, this is the connection factory called TestConnectionFactory, with the JNDI name jms/TestConnectionFactory.( As a reminder, this connection factory is contained in the JMS Module called TestJMSModule, under Services > Messaging > JMS Modules > TestJMSModule which we verified at the beginning of this document. )Enter jms/TestConnectionFactory  into the Property Value field for Connection Factory Location. After entering it, you must press Return/Enter then Save for the value to be accepted. If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, then you will manually need to activate the changes in the WebLogic server console. Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective. 
This should be confirmed by the message Remember to update your deployment to reflect the new plan when you are finished with your changes as can be seen in the following screen shot: The next step is to redeploy the JmsAdapter.Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the “Summary of Deployments” link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button On the Update Application Assistant page, select “Redeploy this application using the following deployment files” and press Finish. After a few seconds you should get the message that the selected deployments were updated. The JMS adapter configuration is complete and it can now be used to access the JMS queue. To summarize: we have created a JMS adapter connection pool connector with the JNDI name jms/TestConnectionFactory. This is the JNDI name to be accessed by a process such as a BPEL process, when using the JMS adapter to access the previously created JMS queue with the JNDI name jms/TestJMSQueue. In the following step, we will set up a BPEL process to use this JMS adapter to write to the JMS queue. 3. Create a BPEL Composite with a JMS Adapter Partner Link This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will use the connection name jbevans-lx-PS5, as that is the name of the connection pointing to my SOA PS5 installation. When using a JMS adapter from within a BPEL process, there are various configuration options, such as the operation type (consume message, produce message etc.), delivery mode and message type. One of these options is the choice of the format of the JMS message payload. This can be structured around an existing XSD, in which case the full XML element and tags are passed, or it can be opaque, meaning that the payload is sent as-is to the JMS adapter. In the case of an XSD-based message, the payload can simply be copied to the input variable of the JMS adapter. In the case of an opaque message, the JMS adapter’s input variable is of type base64binary. So the payload needs to be converted to base64 binary first. I will go into this in more detail in a later blog entry. This sample will pass a simple message to the adapter, based on the following simple XSD file, which consists of a single string element: stringPayload.xsd <?xml version="1.0" encoding="windows-1252" ?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.example.org" targetNamespace="http://www.example.org" elementFormDefault="qualified" <xsd:element name="exampleElement" type="xsd:string"> </xsd:element> </xsd:schema> The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests and, when prompted for a project name and type, call the project JmsAdapterWriteWithXsd and select SOA as the project technology type. If you already have an application, continue below. 
Create a SOA Project Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterWriteSchema. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL Process, name it JmsAdapterWriteSchema too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings. Create an XSD File An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file, containing a string variable and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and when the editor opens, select the Source view. then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the xsd item in the navigation tree. Create a JMS Adapter Partner Link We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it. From the Component Palette, drag a JMS adapter over onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterWrite Oracle Enterprise Messaging Service (OEMS): Oracle Weblogic JMS AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the above JMS queue and connection factory were created. You can use the “+” button to create a connection directly from the wizard, if you do not already have one. This example uses a connection called jbevans-lx-PS5. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Produce Message Operation Name: Produce_message Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue , which is the queue created earlier. JNDI Name: The JNDI name to use for the JMS connection. This is probably the most important step in this exercise and the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue . (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.) MessagesURL: We will use the XSD file we created earlier, stringPayload.xsd to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string. Press Next and Finish, which will complete the JMS Adapter configuration. Wire the BPEL Component to the JMS Adapter In this step, we link the BPEL process/component to the JMS adapter. 
From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter's in-arrow. This completes the steps at the composite level.

4. Complete the BPEL Process Design

Invoke the JMS Adapter

Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterWriteSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane. If JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane. An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities. Drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes. We only need to define the input variable to use for the JMS adapter. By pressing the green "+" symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable. (For some reason, while I was testing this, the JMS adapter moved back to the left-hand swim lane again after this step. There is no harm in leaving it there, but I find it easier to follow if it is in the right-hand lane, because I kind of think of the message coming in on the left and being routed out through the right. But you can follow your personal preference here.)

Assign Variables

Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter and, so the process also has an output to print, copy it to the process's output variable as well. Double-click the Assign activity and create two Copy rules:

For the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement.
For the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result.

This will create the two copy rules. Press OK. This completes the BPEL and Composite design.

5. Compile and Deploy the Composite

We won't go into too much detail on how to compile and deploy. In JDeveloper, compile the process by pressing the Make or Rebuild icons or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier: right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish. You should see the message "---- Deployment finished. ----" in the Deployment frame, if the deployment was successful.

6. Test the Composite

This is the exciting part. Open two tabs in your browser and log in to the WebLogic Administration Console in one tab and the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation in the other. We will use the Console to monitor the messages being written to the queue and the EM to execute the composite. In the Console, go to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. Note the number of messages under Messages Current.
In the EM, go to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterWriteSchema [1.0], then press the Test button. Under Input Arguments, enter any string into the text input field for the payload, for example Test Message, then press Test Web Service. If the instance is successful you should see the same text in the Response message, "Test Message". In the Console, refresh the Monitoring screen to confirm a new message has been written to the queue. Check the checkbox and press Show Messages. Click on the newest message and view its contents. They should include the full XML of the entered payload.

7. Troubleshooting

If you get an exception similar to the following at runtime:

BINDING.JCA-12510 JCA Resource Adapter location error. Unable to locate the JCA Resource Adapter via .jca binding file element The JCA Binding Component is unable to startup the Resource Adapter specified in the element: location='eis/wls/QueueTest'. The reason for this is most likely that either 1) the Resource Adapters RAR file has not been deployed successfully to the WebLogic Application server or 2) the '' element in weblogic-ra.xml has not been set to eis/wls/QueueTest. In the last case you will have to add a new WebLogic JCA connection factory (deploy a RAR). Please correct this and then restart the Application Server
at oracle.integration.platform.blocks.adapter.fw.AdapterBindingException.createJndiLookupException(AdapterBindingException.java:130)
at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.createJCAConnectionFactory(JCAConnectionManager.java:1387)
at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.newPoolObject(JCAConnectionManager.java:1285)
...

then this is very likely due to an incorrect JNDI name entered for the JMS Connection in the JMS Adapter Wizard. Recheck those steps. The error message prints the JNDI name used. In this example, it was incorrectly entered as eis/wls/QueueTest instead of eis/wls/TestQueue.

This concludes this example.

Best regards
John-Brown Evans
Oracle Technology Proactive Support Delivery

    Read the article

  • InnoDB: Error: log file ./ib_logfile0 is of different size

    - by jack
    I just added the following lines in /etc/mysql/my.cnf after I converted one database to use the InnoDB engine. innodb_buffer_pool_size = 2560M innodb_log_file_size = 256M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 16 innodb_flush_method = O_DIRECT But it raised "ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query" when restarting mysqld, and the mysql error log shows the following: InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 268435456 bytes! 100118 20:52:52 [ERROR] Plugin 'InnoDB' init function returned error. 100118 20:52:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100118 20:52:52 [ERROR] Unknown/unsupported table type: InnoDB 100118 20:52:52 [ERROR] Aborting So I commented out this line # innodb_log_file_size = 256M and mysql restarted successfully. I wonder what the "5242880 bytes" log file size shown in the mysql error refers to? It's the first database using the InnoDB engine on this server, so when and where was that log file created? In this case, how can I enable the innodb_log_file_size directive in my.cnf? EDIT I tried to delete /var/lib/mysql/ib_logfile0 and restart mysqld but it still failed. It now shows the following in the error log. 100118 21:27:11 InnoDB: Log file ./ib_logfile0 did not exist: new to be created InnoDB: Setting log file ./ib_logfile0 size to 256 MB InnoDB: Database physically writes the file full: wait... InnoDB: Progress in MB: 100 200 InnoDB: Error: log file ./ib_logfile1 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 268435456 bytes! Resolution It works now after deleting both ib_logfile0 and ib_logfile1 in /var/lib/mysql
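    The 5242880 bytes (5 MB) is the default size of each InnoDB redo log file created when InnoDB was first initialized; InnoDB refuses to start if the files on disk do not match the configured innodb_log_file_size. As the resolution above indicates, the log files have to be recreated at the new size. A minimal sketch of that procedure, assuming a Debian/Ubuntu-style init script and the default /var/lib/mysql data directory (moving the files aside is safer than deleting them outright):

        # stop MySQL cleanly so InnoDB flushes everything to the data files
        sudo /etc/init.d/mysql stop
        # move the old redo logs out of the way
        sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
        # start MySQL again; with innodb_log_file_size = 256M enabled in my.cnf,
        # InnoDB recreates both log files at the new size
        sudo /etc/init.d/mysql start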

    Read the article

  • Convert Video and Remove Commercials in Windows 7 Media Center with MCEBuddy 1.1

    - by DigitalGeekery
    Today we take a look at MCEBuddy for Windows 7 Media Center. This handy app automatically takes your recorded TV files and converts them to MP4, AVI, WMV, or MPEG format. It even has the option to cut out those annoying commercials during the conversion process. Installation and Configuration Download and extract MCEBuddy. (Download link below) Run the setup.exe file and take all the default settings. Open MCEBuddy Configuration by going to Start > All Programs > MCEBuddy > MCEBuddy Configuration. Video Paths The MCEBuddy application consists of a single window. The first step you'll want to take is to define your Source and Destination paths. The "Source" will most likely be your Recorded TV directory. The Destination should NOT be the same as the Source folder. Note: The Recorded TV directory in Windows 7 Media Center will only display and play WTV & DVR-MS files. To watch the converted MP4, AVI, WMV, or MPEG files in Windows Media Center you'll need to add them to your Video Library or Movie Library. Video Conversion Next, choose your preferred format for conversion from the "Convert to" drop down list. The default is MP4 with the H.264 codec. You'll find a wide variety of formats. The first set of conversion options in the drop down list will resize the video to 720 pixels wide. The next two sections maintain the original size, and the final section is for a variety of portable devices. Next, you'll see a group of check boxes below the "Convert to" drop down list. The Commercial Skipping option will cut the commercials while converting the file. Sort By Series will create a sub-folder in your Destination folder for each TV show. Delete Original will delete the WTV file after conversion is complete. (This option is not recommended unless you are sure your files are converting properly and you no longer need the WTV file.) Start Minimized is ideal if you want to run MCEBuddy on Windows startup. Note: MCEBuddy installs and uses Comskip for commercial cutting by default. However, if you have ShowAnalyzer installed, it will use that application instead. Advanced Options To choose a specific time of day to perform the conversions, click the checkbox under "Advanced Options" and select the starting and ending times for conversion. For example, entering 2 and 5 means conversions run between 2 am and 5 am. If you want MCEBuddy to constantly look for and immediately convert new recordings, leave the box unchecked. The "Video age" option lets you choose a specific number of days to wait before performing the conversion. This can be useful if you want to watch the recordings first and delete those you don't wish to convert. You can also choose "Sub Directories" if you'd like MCEBuddy to convert files that are in a sub-folder of your "Source" directory. Second Conversion As you might expect, this option allows MCEBuddy to perform a second conversion of your file. This can be useful if you want to use your first conversion to create a higher quality MP4 or AVI file for playback on a larger screen, and a second one for a portable device such as a Zune or iPhone. The same options from the first conversion are also available for the second. You'll want to choose a separate Destination folder for the second conversion. Start and Monitor Progress To start converting your video files, simply press the "Start" button at the bottom. You'll be able to follow the progress in the "Current Activity" section.
    When all the video files have finished converting, or there are no current files to convert, MCEBuddy will display a "Started - Idle" status. Click "Stop" if you don't want MCEBuddy to continue scanning for new files. Conclusion MCEBuddy 1.1 will convert all WTV files in its source folder. If you want to pick and choose which recordings to convert, you may want to define a source folder different from the Recorded TV folder and then just copy or move the files you wish to convert into the new source folder. The conversion process does take a good bit of time. If you choose the commercial skipping and second conversion options it can take several hours to fully convert one TV recording. Overall, MCEBuddy makes a nice Media Center addition for those that want to save some space with smaller files, convert Recorded TV files for their portable device, or automatically remove commercials. If you're looking for a different method to skip commercials, check out our post on how to skip commercials in Windows 7 Media Center. Download MCEBuddy 1.1

    Read the article

  • 8 Things You Can Do In Android’s Developer Options

    - by Chris Hoffman
    The Developer Options menu in Android is a hidden menu with a variety of advanced options. These options are intended for developers, but many of them will be interesting to geeks. You'll have to perform a secret handshake to enable the Developer Options menu in the Settings screen, as it's hidden from Android users by default. Follow the simple steps to quickly enable Developer Options. Enable USB Debugging "USB debugging" sounds like an option only an Android developer would need, but it's probably the most widely used hidden option in Android. USB debugging allows applications on your computer to interface with your Android phone over the USB connection. This is required for a variety of advanced tricks, including rooting an Android phone, unlocking it, installing a custom ROM, or even using a desktop program that captures screenshots of your Android device's screen. You can also use ADB commands to push and pull files between your device and your computer or create and restore complete local backups of your Android device without rooting. USB debugging can be a security concern, as it gives computers you plug your device into access to your phone. You could plug your device into a malicious USB charging port, which would try to compromise you. That's why Android forces you to agree to a prompt every time you plug your device into a new computer with USB debugging enabled. Set a Desktop Backup Password If you use the above ADB trick to create local backups of your Android device over USB, you can protect them with a password with the Set a desktop backup password option here. This password encrypts your backups to secure them, so you won't be able to access them if you forget the password. Disable or Speed Up Animations When you move between apps and screens in Android, you're spending some of that time looking at animations and waiting for them to go away. You can disable these animations entirely by changing the Window animation scale, Transition animation scale, and Animator duration scale options here. If you like animations but just wish they were faster, you can speed them up. On a fast phone or tablet, this can make switching between apps nearly instant. If you thought your Android phone was speedy before, just try disabling animations and you'll be surprised how much faster it can seem. Force-Enable 4x MSAA For OpenGL Games If you have a high-end phone or tablet with great graphics performance and you play 3D games on it, there's a way to make those games look even better. Just go to the Developer Options screen and enable the Force 4x MSAA option. This will force Android to use 4x multisample anti-aliasing in OpenGL ES 2.0 games and other apps. This requires more graphics power and will probably drain your battery a bit faster, but it will improve image quality in some games. This is a bit like force-enabling antialiasing using the NVIDIA Control Panel on a Windows gaming PC. See How Bad Task Killers Are We've written before about how task killers are worse than useless on Android. If you use a task killer, you're just slowing down your system by throwing out cached data and forcing Android to load apps from system storage whenever you open them again. Don't believe us? Enable the Don't keep activities option on the Developer options screen and Android will force-close every app you use as soon as you exit it.
    Enable this option and use your phone normally for a few minutes — you'll see just how harmful throwing out all that cached data is and how much it will slow down your phone. Don't actually use this option unless you want to see how bad it is! It will make your phone perform much more slowly — there's a reason Google has hidden these options away from average users who might accidentally change them. Fake Your GPS Location The Allow mock locations option allows you to set fake GPS locations, tricking Android into thinking you're at a location where you actually aren't. Use this option along with an app like Fake GPS location and you can trick your Android device and the apps running on it into thinking you're at locations where you actually aren't. How would this be useful? Well, you could fake a GPS check-in at a location without actually going there or confuse your friends in a location-tracking app by seemingly teleporting around the world. Stay Awake While Charging You can use Android's Daydream Mode to display certain apps while charging your device. If you want to force Android to display a standard Android app that hasn't been designed for Daydream Mode, you can enable the Stay awake option here. Android will keep your device's screen on while charging and won't turn it off. It's like Daydream Mode, but it can support any app and allows users to interact with it. Show Always-On-Top CPU Usage You can view CPU usage data by toggling the Show CPU usage option to On. This information will appear on top of whatever app you're using. If you're a Linux user, the three numbers on top probably look familiar — they represent the system load average. From left to right, the numbers represent your system load over the last one, five, and fifteen minutes. This isn't the kind of thing you'd want enabled most of the time, but it can save you from having to install third-party floating CPU apps if you want to see CPU usage information for some reason. Most of the other options here will only be useful to developers debugging their Android apps. You shouldn't start changing options you don't understand. If you want to undo any of these changes, you can quickly erase all your custom options by sliding the switch at the top of the screen to Off.
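    As a side note, the ADB commands mentioned in the USB debugging section above look roughly like this. This is only a sketch; it assumes the Android SDK platform tools are installed and the device has been authorized, and the file paths and backup file name are placeholders:

        # confirm the device is visible over USB debugging
        adb devices
        # copy a file from the device to the computer (adb push goes the other way)
        adb pull /sdcard/DCIM/Camera/photo.jpg ./photo.jpg
        # create a full local backup without rooting (protected by the desktop backup password, if set)
        adb backup -apk -shared -all -f mybackup.ab
        # restore that backup later
        adb restore mybackup.ab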

    Read the article

  • fedora12, yum not releasing "lock" after performing an action

    - by James.Elsey
    Hello, this problem has been occurring quite frequently recently and I can't seem to find a way of preventing it. Whenever I perform an action with yum, such as installing or removing software, it appears to execute successfully, but then I'm unable to move on to the next yum command. For example, I executed yum remove skype and it appeared to remove OK, but when I then try yum search skype it appears that yum is still processing, and I have to manually kill that process via kill 1234 (or whatever the PID is). My output is as follows: [root@nevada james]# yum remove skype Loaded plugins: presto, refresh-packagekit Setting up Remove Process Resolving Dependencies --> Running transaction check ---> Package skype.i586 0:2.1.0.47-fc10 set to be erased --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Removing: skype i586 2.1.0.47-fc10 installed 24 M Transaction Summary ================================================================================ Remove 1 Package(s) Reinstall 0 Package(s) Downgrade 0 Package(s) Is this ok [y/N]: y Downloading Packages: Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Test Succeeded Running Transaction Erasing : skype-2.1.0.47-fc10.i586 1/1 Removed: skype.i586 0:2.1.0.47-fc10 Complete! [root@nevada james]# yum search skype Loaded plugins: presto, refresh-packagekit Existing lock /var/run/yum.pid: another copy is running as pid 3639. Another app is currently holding the yum lock; waiting for it to exit... The other application is: PackageKit Memory : 79 M RSS (372 MB VSZ) Started: Fri Dec 18 08:39:18 2009 - 00:01 ago State : Sleeping, pid: 3639 Kernel version : 2.6.31.6-166.fc12.x86_64 Any ideas how I can prevent this behaviour? Thanks
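    For what it's worth, a sketch of the usual way to deal with a lock held like this (the PID and the PackageKit culprit come from the output above; the last step only applies if you decide you do not want PackageKit competing for the lock):

        # see which process is holding the lock
        cat /var/run/yum.pid
        ps -p 3639 -o pid,comm
        # either wait for PackageKit to finish, or stop it and clear the stale lock file
        kill 3639
        rm -f /var/run/yum.pid
        # the refresh-packagekit yum plugin shown in "Loaded plugins:" is what hands
        # control to PackageKit after each transaction; disabling it is one option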

    Read the article

  • Help configuring Mercury mail or similar with XAMPP to send e-mail outside of localhost

    - by user291040
    I'm building a PHP/MySQL driven website for my department at work (installed via XAMPP). I need to be able to send mail to outside e-mail addresses (e.g., Yahoo, Hotmail, etc.) using the PHP mail() function. As I see it I have two solutions: Configure the SMTP directive in php.ini to point to the server running at my work. Configure/run a mail server that can send e-mails outside of localhost (I'm trying Mercury because it comes installed with XAMPP). Here are the problems I've come up against: I took a guess at our SMTP server name, and when calling PHP mail(), I get the error SMTP server response: 530 5.7.1 Client was not authenticated I can't be sure, however, that the SMTP name is correct (I can't get help from our IT guys because of politics). I have tried to use Mercury Mail. Mercury seems to be picking up the request, but it doesn't want to forward the e-mail to the outside. I keep getting a Temporary error 240 (temporary MX resolution error). I've searched high and low but still can't find a definitive answer on how to send e-mails outside of localhost. Any help is greatly appreciated.
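    For reference, the first option boils down to a few php.ini directives, sketched below. The host name is a placeholder for whatever the work SMTP server actually is, and note that PHP's built-in SMTP support on Windows does not perform authentication, which is consistent with the 530 error above:

        [mail function]
        ; Windows-only: server and port used by mail()
        SMTP = smtp.example.local
        smtp_port = 25
        ; address used as the From header / envelope sender
        sendmail_from = me@example.local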

    Read the article

  • working in external actionscript file does not show anything on the screen?

    - by XNA
    I'm writing this code in Flash Builder and I tested the file in Flash, but nothing appears in the SWF file (no text shows on the screen, and I don't know why). Is there any missing property in the code? Also, when I create text or a movie clip with the Flash tools on the stage and give it an instance name, Flash Builder doesn't seem to recognize it in the ActionScript code. package { import flash.display.MovieClip; import flash.text.TextField; public class mark extends MovieClip { public function mark() { super(); public var d:TextField=new TextField(); d.text="Hello world"; d.x=250; d.y=300; addChild(d); } }
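    The snippet as posted would not compile: a public var declaration cannot appear inside the constructor, and the package is missing a closing brace. A corrected sketch of the same class, assuming it is set as the document class of the FLA:

        package {
            import flash.display.MovieClip;
            import flash.text.TextField;

            public class mark extends MovieClip {
                // declare the field at class level, not inside the constructor
                public var d:TextField = new TextField();

                public function mark() {
                    super();
                    d.text = "Hello world";
                    d.x = 250;
                    d.y = 300;
                    addChild(d);
                }
            }
        }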

    Read the article

  • can't start vino VNC service on Ubuntu 12.04

    - by user1689961
    I just installed vino, but when I run it, I get the following error. # ./start_vnc (vino-server:2502): EggSMClient-CRITICAL **: egg_sm_client_set_mode: assertion `global_client == NULL || global_client_mode == EGG_SM_CLIENT_MODE_DISABLED' failed ** Message: The desktop sharing service is already running, exiting. MORE DETAILS: An UltraVNC client running on a Windows computer can log in and show the Ubuntu desktop, and it can control the Ubuntu mouse, but the VNC client view on the Windows computer does NOT show any changes made to the display on the Ubuntu desktop, only the original desktop view from the time of the VNC client login. UPDATE I solved it by following the Ask Ubuntu post "VNC session very slow in 12.04 compared to older versions", which said to do this: gsettings set org.gnome.Vino disable-xdamage true ...and it worked. But should I be concerned about the error messages?

    Read the article

  • IIS6 Wildcard Mapping to ASP.NET - no file extension results in IIS 404

    - by Ian Robinson
    I'm trying to perform what I understand to be a relatively simple task. I'd like to remove the extensions from the URLs on my website. I have the proper setup in my application to handle and rewrite the URLs - the trouble is I can't get past IIS to actually reach my application without the extensions. The details: I'm running IIS6 on Windows Server 2003. I've gone into the web site for my application, gone to the home directory tab, clicked "Configuration" and added a wildcard map to the following file: c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll I verified that this is the same file used in the application extensions section for .ascx, etc. If I navigate to http://mywebsite.com/Blogs the result is as follows: HTTP/1.1 404 Not Found Content-Length: 1635 Content-Type: text/html Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Date: Thu, 14 Jan 2010 15:04:49 GMT This seems to be a standard IIS 404 message. If I navigate to http://mywebsite.com/Blogs.aspx I get my ASP.NET app.... How can I troubleshoot this? I feel like I've double-checked everything a dozen times but to no avail. I must be missing something obvious. Update: Here are the exact instructions given by the asp.net url rewriter that I'm using: IIS 6.0 - Windows 2003 Server open property page for website / virtual directory. click the 'home directory' tab click the 'configuration' button, select the 'mappings' tab click 'insert' next to the 'Wildcard application maps' section browse to the aspnet_isapi.dll (normally at c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll) Ensure that 'check that file exists' is unchecked Click OK, OK, OK to close and apply changes Update 2: I have yet to find a resolution for this. The application does not seem to be receiving the request from IIS. Any further ideas?

    Read the article

  • Is trailing slash automagically added on click of home page URL in browser?

    - by Question Overflow
    I am asking this because whenever I mouse over a link to a home page (e.g. http://www.example.com), I notice that a trailing slash is always added (as observed in the status bar of the browser), whether or not the home page link contains an href attribute that ends with a slash. But whenever I am on the home page, the URL on display will not have a trailing slash. I tried adding a slash to the URL in the URL bar, and with Firebug enabled, I notice that the site always returns a 200 OK status. An article here discussing this states that having a slash at the end will avoid a 301 redirection. But I am not seeing any redirection, even on this page. Could this be a browser feature that is appending the slash?
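    For what it's worth, the slash is simply the URL's path component: a URL with an empty path is normalized to "/" when the request is formed, which is why the status bar shows it. A sketch of the relevant part of the request the browser ends up sending for http://www.example.com:

        GET / HTTP/1.1
        Host: www.example.com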

    Read the article

  • How to customise search core results web part Part1

    - by ybbest
    In this post, I'd like to show you how to customise the search core results web part. It is quite simple; most of the time all you need to do is change the XSLT to make the changes. Here are the steps: 1. You need to change the XSLT to the following, so that you can see the raw XML. <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" > <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" /> <xsl:template match="/"> <xmp><xsl:copy-of select="*"/></xmp> </xsl:template> </xsl:stylesheet> a. To do so, you need to go to Edit Page >> Edit search core results web part >> Display Properties and then untick Use Location Visualization. b. Open the XSLT editor and copy the existing XSLT code to your preferred XSLT editor so that you can customise it. c. Now you can paste in the XSLT code above. 2. Perform the search after you have completed step 1 and you will see the search results returned as raw XML <All_Results> <Result> <id>1</id> <workid>678</workid> <rank>100000000</rank> <title>Ybbest</title> <author></author> <size>137531</size> <url>http://ybbest</url> <urlEncoded>http%3A%2F%2Fybbest</urlEncoded> <description>Ybbest test site</description> <write>3/17/2012</write> <sitename>http://ybbest</sitename> <collapsingstatus>0</collapsingstatus> <hithighlightedsummary> <c0>Ybbest</c0> test site <ddd /> Add a new image, change this welcome text or add new lists to this page by clicking the edit button above. You can click on Shared Documents to add files or on the <ddd /> </hithighlightedsummary> <hithighlightedproperties> <HHTitle> <c0>Ybbest</c0> </HHTitle> <HHUrl>http://<c0>ybbest</c0></HHUrl> </hithighlightedproperties> <contentclass>STS_Site</contentclass> <isdocument>False</isdocument> <picturethumbnailurl></picturethumbnailurl> <popularsocialtags /> <picturewidth>0</picturewidth> <pictureheight>0</pictureheight> <datepicturetaken></datepicturetaken> <serverredirectedurl></serverredirectedurl> <fileextension></fileextension> <ows_metadatafacetinfo></ows_metadatafacetinfo> <imageurl imageurldescription="SharePoint Site Collection">/_layouts/images/siteicon_16x16.png</imageurl> </Result> <TotalResults>69</TotalResults> <NumberOfResults>50</NumberOfResults> </All_Results> 3. Then you can read what has been returned in the raw XML and start modifying the XSLT to customise your search results page. 4. You can also link an external XSLT to the web part. It can be set in the Miscellaneous section of the web part properties. You can also set it programmatically using a feature receiver; you can download the source code to do so here. References: http://stackoverflow.com/questions/6548104/change-xslt-of-the-searchresultwebpart-during-the-featureactivated http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/04/05/a-quick-guide-to-coreresultswebpart-configuration-changes-in-sharepoint-2010.aspx http://www.tonytestasworld.com/post/2011/01/30/HowTo-display-SharePoint-Search-results-as-raw-XML.aspx
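    As a trivial illustration of step 3, here is a sketch of an XSLT that renders each result as a linked title with its description, based only on the element names visible in the raw XML above. It is purely an example starting point, not the full out-of-the-box stylesheet:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="html" indent="yes"/>
          <xsl:template match="/All_Results">
            <ul>
              <xsl:for-each select="Result">
                <li>
                  <!-- link the title to the result URL -->
                  <a href="{url}"><xsl:value-of select="title"/></a>
                  <div><xsl:value-of select="description"/></div>
                </li>
              </xsl:for-each>
            </ul>
          </xsl:template>
        </xsl:stylesheet>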

    Read the article

  • How to Always Load Internet Explorer 9 in Full Screen Mode

    - by Lori Kaufman
    Internet Explorer 9 has a minimal interface by default, with the tab bar and the toolbar and address bar on the same line. However, you can gain even more viewable space by pressing F11 to go to full screen mode. If you like full screen mode and want to use it most of the time, you can have Internet Explorer open in that mode automatically, by editing a setting in the registry. To begin, enter "regedit" (without the quotes) in the Search box on the Start menu. When the results display, click regedit.exe or press Enter when it's highlighted. NOTE: Before making changes to the registry, be sure you back it up. We also recommend creating a restore point you can use to restore your system if something goes wrong.
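    The excerpt above ends before it names the actual registry value, so treat this as a hedged illustration only: the setting commonly used for this purpose is the FullScreen string value under the Internet Explorer Main key, which a .reg file would express roughly like this:

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main]
        "FullScreen"="yes"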

    Read the article

  • Group policy waited for the network subsystem

    - by the-wabbit
    In an AD domain with Windows Server 2008 R2 DCs, users are complaining about delays in the bootup process of the clients. The group policy log reveals that the client is waiting ~20-50 seconds for "the network subsystem": Event 5322, GroupPolicy Group policy waited for 29687 milliseconds for the network subsystem at computer boot. This appears to be domain-specific, as machines joining a different domain from the same network do not experience any delays and Event 5322 reports <1000 ms wait times at startup. It happens on virtual and physical machines alike, so it does not look like a hardware- or driver-related issue. Further investigation has shown that the client is taking its time before issuing DHCP requests. In the network traces, I can see IPv6 router solicitations and multicast DNS name registrations as soon as the network driver is loaded and the network connection is reported "up" in the event log (e1cexpress/36). Yet the DHCPv4 client service seems to take another 15-50 seconds to start (Dhcp-Client/50036), so the IPv4 address remains unconfigured for a while. The DHCP client's event log messages follow the service start of the "Sophos Anti-Virus" service (Sophos AV 10.3 package), which I suspect to be the culprit - the DHCP client service dependencies include the TDI Support driver, which might be what Sophos is using to intercept network traffic. Network Location Awareness seems to break at startup as a side-effect; I see that off-site DCs are contacted due to what seems like a race condition between the GP client and the DHCP client / NLA service startup. I could set the Group Policy Client service to depend on NLA, yet this still would not eliminate the delay. Also, I am not all that sure that this is a good idea. Is there a known resolution which would eliminate the startup delay?
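    For completeness, the dependency change mentioned above would look roughly like this from an elevated prompt. This is only a sketch: depend= replaces the whole dependency list, so the existing dependencies (shown by sc qc) have to be repeated, the placeholder below is not a literal value, and on recent Windows versions the gpsvc service configuration is ACL-protected, so extra permissions may be needed:

        rem show the current dependencies of the Group Policy Client service
        sc qc gpsvc
        rem add Network Location Awareness (NlaSvc) to the dependency list, keeping the existing entries
        sc config gpsvc depend= <existing-dependencies>/NlaSvc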

    Read the article

  • How to use integrated Intel graphics instead of Nvidia graphics on MacBook Pro?

    - by Benjamin Geese
    I am running Ubuntu 12.04 64-bit on a MacBook Pro 15" 2010 (MacBookPro6,2) and I would like to use the integrated Intel graphics instead of the dedicated Nvidia graphics that Ubuntu boots with on this machine. I am booting with UEFI, not rEFIt or similar. I managed to switch to UEFI with the help of this page. This wiki page also contains tips on switching to Intel graphics, which include some (for me) cryptic boot commands for grub. However, if I follow the guide, my display just stays black. Currently, I am only looking for a solution to use the Intel graphics in Ubuntu to save power and keep my MacBook cool. Dynamic switching or anything like that is not required.

    Read the article

  • Yum Update Failing mod_ssl and glibc_devel

    - by Kerry
    Any ideas on how to get this to not fail? # yum update Freeing read locks for locker 0x82: 4189/140342084876032 Freeing read locks for locker 0x84: 4189/140342084876032 Freeing read locks for locker 0x85: 4189/140342084876032 Freeing read locks for locker 0x86: 4189/140342084876032 Freeing read locks for locker 0x87: 4189/140342084876032 Freeing read locks for locker 0x9a: 4189/140342084876032 Freeing read locks for locker 0x9c: 4189/140342084876032 Freeing read locks for locker 0x9d: 4189/140342084876032 Freeing read locks for locker 0x9e: 4189/140342084876032 Freeing read locks for locker 0x9f: 4189/140342084876032 Freeing read locks for locker 0xa0: 4189/140342084876032 Freeing read locks for locker 0xa1: 4189/140342084876032 Freeing read locks for locker 0xa2: 4189/140342084876032 Freeing read locks for locker 0xa3: 4189/140342084876032 Freeing read locks for locker 0xa4: 4189/140342084876032 Freeing read locks for locker 0xa5: 4189/140342084876032 Freeing read locks for locker 0xa6: 4189/140342084876032 Freeing read locks for locker 0xa7: 4189/140342084876032 Freeing read locks for locker 0xa8: 4189/140342084876032 Freeing read locks for locker 0xa9: 4189/140342084876032 Freeing read locks for locker 0xaa: 4189/140342084876032 Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.hmc.edu * epel: mirrors.kernel.org * extras: centos.mirror.freedomvoice.com * updates: mirrors.sonic.net Setting up Update Process Resolving Dependencies There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them. The program yum-complete-transaction is found in the yum-utils package. --> Running transaction check ---> Package device-mapper-persistent-data.x86_64 0:0.2.8-2.el6 will be updated ---> Package device-mapper-persistent-data.x86_64 0:0.2.8-4.el6_5 will be an update ---> Package glibc-headers.x86_64 0:2.12-1.132.el6 will be updated --> Processing Dependency: glibc-headers = 2.12-1.132.el6 for package: glibc-devel-2.12-1.132.el6.x86_64 ---> Package glibc-headers.x86_64 0:2.12-1.132.el6_5.2 will be an update ---> Package httpd.x86_64 0:2.2.15-29.el6.centos will be updated --> Processing Dependency: httpd = 2.2.15-29.el6.centos for package: 1:mod_ssl-2.2.15-29.el6.centos.x86_64 ---> Package httpd.x86_64 0:2.2.15-30.el6.centos will be an update ---> Package kernel.x86_64 0:2.6.32-431.17.1.el6 will be installed ---> Package kernel-devel.x86_64 0:2.6.32-431.17.1.el6 will be installed ---> Package selinux-policy-targeted.noarch 0:3.7.19-231.el6_5.1 will be updated ---> Package selinux-policy-targeted.noarch 0:3.7.19-231.el6_5.3 will be an update --> Finished Dependency Resolution Error: Package: 1:mod_ssl-2.2.15-29.el6.centos.x86_64 (@base) Requires: httpd = 2.2.15-29.el6.centos Removing: httpd-2.2.15-29.el6.centos.x86_64 (@base) httpd = 2.2.15-29.el6.centos Updated By: httpd-2.2.15-30.el6.centos.x86_64 (updates) httpd = 2.2.15-30.el6.centos Error: Package: glibc-devel-2.12-1.132.el6.x86_64 (@base) Requires: glibc-headers = 2.12-1.132.el6 Removing: glibc-headers-2.12-1.132.el6.x86_64 (@base) glibc-headers = 2.12-1.132.el6 Updated By: glibc-headers-2.12-1.132.el6_5.2.x86_64 (updates) glibc-headers = 2.12-1.132.el6_5.2 Available: glibc-headers-2.12-1.132.el6_5.1.x86_64 (updates) glibc-headers = 2.12-1.132.el6_5.1 You could try using --skip-broken to work around the problem ** Found 34 pre-existing rpmdb problem(s), 'yum check' output follows: audit-2.2-4.el6_5.x86_64 is a duplicate with audit-2.2-2.el6.x86_64 
audit-libs-2.2-4.el6_5.x86_64 is a duplicate with audit-libs-2.2-2.el6.x86_64 curl-7.19.7-37.el6_5.3.x86_64 is a duplicate with curl-7.19.7-37.el6_4.x86_64 device-mapper-multipath-0.4.9-72.el6_5.2.x86_64 is a duplicate with device-mapper-multipath-0.4.9-72.el6_5.1.x86_64 device-mapper-multipath-libs-0.4.9-72.el6_5.2.x86_64 is a duplicate with device-mapper-multipath-libs-0.4.9-72.el6_5.1.x86_64 2:ethtool-3.5-1.4.el6_5.x86_64 is a duplicate with 2:ethtool-3.5-1.2.el6_5.x86_64 glibc-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-2.12-1.132.el6.x86_64 glibc-common-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-devel-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6_5.2.x86_64 has missing requires of glibc-headers = ('0', '2.12', '1.132.el6_5.2') gnutls-2.8.5-14.el6_5.x86_64 is a duplicate with gnutls-2.8.5-13.el6_5.x86_64 httpd-2.2.15-29.el6.centos.x86_64 has missing requires of httpd-tools = ('0', '2.2.15', '29.el6.centos') httpd-manual-2.2.15-30.el6.centos.noarch has missing requires of httpd = ('0', '2.2.15', '30.el6.centos') iproute-2.6.32-32.el6_5.x86_64 is a duplicate with iproute-2.6.32-31.el6.x86_64 kernel-firmware-2.6.32-431.17.1.el6.noarch is a duplicate with kernel-firmware-2.6.32-431.11.2.el6.noarch kernel-headers-2.6.32-431.17.1.el6.x86_64 is a duplicate with kernel-headers-2.6.32-431.11.2.el6.x86_64 kpartx-0.4.9-72.el6_5.2.x86_64 is a duplicate with kpartx-0.4.9-72.el6_5.1.x86_64 krb5-libs-1.10.3-15.el6_5.1.x86_64 is a duplicate with krb5-libs-1.10.3-10.el6_4.6.x86_64 libblkid-2.17.2-12.14.el6_5.x86_64 is a duplicate with libblkid-2.17.2-12.14.el6.x86_64 libcurl-7.19.7-37.el6_5.3.x86_64 is a duplicate with libcurl-7.19.7-37.el6_4.x86_64 libcurl-devel-7.19.7-37.el6_5.3.x86_64 is a duplicate with libcurl-devel-7.19.7-37.el6_4.x86_64 libtasn1-2.3-6.el6_5.x86_64 is a duplicate with libtasn1-2.3-3.el6_2.1.x86_64 libuuid-2.17.2-12.14.el6_5.x86_64 is a duplicate with libuuid-2.17.2-12.14.el6.x86_64 libxml2-2.7.6-14.el6_5.1.x86_64 is a duplicate with libxml2-2.7.6-14.el6.x86_64 mdadm-3.2.6-7.el6_5.2.x86_64 is a duplicate with mdadm-3.2.6-7.el6.x86_64 1:mod_ssl-2.2.15-30.el6.centos.x86_64 is a duplicate with 1:mod_ssl-2.2.15-29.el6.centos.x86_64 1:mod_ssl-2.2.15-30.el6.centos.x86_64 has missing requires of httpd = ('0', '2.2.15', '30.el6.centos') nss-softokn-3.14.3-10.el6_5.x86_64 is a duplicate with nss-softokn-3.14.3-9.el6.x86_64 openssl-1.0.1e-16.el6_5.7.x86_64 is a duplicate with openssl-1.0.1e-16.el6_5.4.x86_64 openssl-1.0.1e-16.el6_5.14.x86_64 is a duplicate with openssl-1.0.1e-16.el6_5.7.x86_64 openssl-devel-1.0.1e-16.el6_5.14.x86_64 is a duplicate with openssl-devel-1.0.1e-16.el6_5.7.x86_64 selinux-policy-3.7.19-231.el6_5.3.noarch is a duplicate with selinux-policy-3.7.19-231.el6_5.1.noarch tzdata-2014d-1.el6.noarch is a duplicate with tzdata-2014b-1.el6.noarch util-linux-ng-2.17.2-12.14.el6_5.x86_64 is a duplicate with util-linux-ng-2.17.2-12.14.el6.x86_64 UPDATE I installed and ran yum-complete-transaction as requested, it finished some things and suggested I run package-cleanup --problems, which yielded this: package-cleanup --problems Loaded plugins: fastestmirror Package httpd-manual-2.2.15-30.el6.centos.noarch requires httpd = ('0', '2.2.15', '30.el6.centos') Package httpd-2.2.15-29.el6.centos.x86_64 requires httpd-tools = ('0', '2.2.15', '29.el6.centos') Package mod_ssl-2.2.15-30.el6.centos.x86_64 requires httpd = ('0', '2.2.15', '30.el6.centos') Package 
glibc-devel-2.12-1.132.el6_5.2.x86_64 requires glibc-headers = ('0', '2.12', '1.132.el6_5.2') I'm definitely not a sys-admin, what would be the next step? UPDATE 2 I ran yum distro-sync: # yum distro-sync Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.hmc.edu * epel: mirrors.kernel.org * extras: centos.mirror.freedomvoice.com * updates: mirrors.sonic.net Setting up Distribution Synchronization Process Resolving Dependencies --> Running transaction check ---> Package glibc-headers.x86_64 0:2.12-1.132.el6 will be updated --> Processing Dependency: glibc-headers = 2.12-1.132.el6 for package: glibc-devel-2.12-1.132.el6.x86_64 ---> Package glibc-headers.x86_64 0:2.12-1.132.el6_5.2 will be an update ---> Package httpd.x86_64 0:2.2.15-29.el6.centos will be updated --> Processing Dependency: httpd = 2.2.15-29.el6.centos for package: 1:mod_ssl-2.2.15-29.el6.centos.x86_64 ---> Package httpd.x86_64 0:2.2.15-30.el6.centos will be an update --> Finished Dependency Resolution Error: Package: 1:mod_ssl-2.2.15-29.el6.centos.x86_64 (@base) Requires: httpd = 2.2.15-29.el6.centos Removing: httpd-2.2.15-29.el6.centos.x86_64 (@base) httpd = 2.2.15-29.el6.centos Updated By: httpd-2.2.15-30.el6.centos.x86_64 (updates) httpd = 2.2.15-30.el6.centos Error: Package: glibc-devel-2.12-1.132.el6.x86_64 (@base) Requires: glibc-headers = 2.12-1.132.el6 Removing: glibc-headers-2.12-1.132.el6.x86_64 (@base) glibc-headers = 2.12-1.132.el6 Updated By: glibc-headers-2.12-1.132.el6_5.2.x86_64 (updates) glibc-headers = 2.12-1.132.el6_5.2 Available: glibc-headers-2.12-1.132.el6_5.1.x86_64 (updates) glibc-headers = 2.12-1.132.el6_5.1 You could try using --skip-broken to work around the problem ** Found 34 pre-existing rpmdb problem(s), 'yum check' output follows: audit-2.2-4.el6_5.x86_64 is a duplicate with audit-2.2-2.el6.x86_64 audit-libs-2.2-4.el6_5.x86_64 is a duplicate with audit-libs-2.2-2.el6.x86_64 curl-7.19.7-37.el6_5.3.x86_64 is a duplicate with curl-7.19.7-37.el6_4.x86_64 device-mapper-multipath-0.4.9-72.el6_5.2.x86_64 is a duplicate with device-mapper-multipath-0.4.9-72.el6_5.1.x86_64 device-mapper-multipath-libs-0.4.9-72.el6_5.2.x86_64 is a duplicate with device-mapper-multipath-libs-0.4.9-72.el6_5.1.x86_64 2:ethtool-3.5-1.4.el6_5.x86_64 is a duplicate with 2:ethtool-3.5-1.2.el6_5.x86_64 glibc-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-2.12-1.132.el6.x86_64 glibc-common-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-devel-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6_5.2.x86_64 has missing requires of glibc-headers = ('0', '2.12', '1.132.el6_5.2') gnutls-2.8.5-14.el6_5.x86_64 is a duplicate with gnutls-2.8.5-13.el6_5.x86_64 httpd-2.2.15-29.el6.centos.x86_64 has missing requires of httpd-tools = ('0', '2.2.15', '29.el6.centos') httpd-manual-2.2.15-30.el6.centos.noarch has missing requires of httpd = ('0', '2.2.15', '30.el6.centos') iproute-2.6.32-32.el6_5.x86_64 is a duplicate with iproute-2.6.32-31.el6.x86_64 kernel-firmware-2.6.32-431.17.1.el6.noarch is a duplicate with kernel-firmware-2.6.32-431.11.2.el6.noarch kernel-headers-2.6.32-431.17.1.el6.x86_64 is a duplicate with kernel-headers-2.6.32-431.11.2.el6.x86_64 kpartx-0.4.9-72.el6_5.2.x86_64 is a duplicate with kpartx-0.4.9-72.el6_5.1.x86_64 krb5-libs-1.10.3-15.el6_5.1.x86_64 is a duplicate with krb5-libs-1.10.3-10.el6_4.6.x86_64 libblkid-2.17.2-12.14.el6_5.x86_64 is a duplicate with 
libblkid-2.17.2-12.14.el6.x86_64 libcurl-7.19.7-37.el6_5.3.x86_64 is a duplicate with libcurl-7.19.7-37.el6_4.x86_64 libcurl-devel-7.19.7-37.el6_5.3.x86_64 is a duplicate with libcurl-devel-7.19.7-37.el6_4.x86_64 libtasn1-2.3-6.el6_5.x86_64 is a duplicate with libtasn1-2.3-3.el6_2.1.x86_64 libuuid-2.17.2-12.14.el6_5.x86_64 is a duplicate with libuuid-2.17.2-12.14.el6.x86_64 libxml2-2.7.6-14.el6_5.1.x86_64 is a duplicate with libxml2-2.7.6-14.el6.x86_64 mdadm-3.2.6-7.el6_5.2.x86_64 is a duplicate with mdadm-3.2.6-7.el6.x86_64 1:mod_ssl-2.2.15-30.el6.centos.x86_64 is a duplicate with 1:mod_ssl-2.2.15-29.el6.centos.x86_64 1:mod_ssl-2.2.15-30.el6.centos.x86_64 has missing requires of httpd = ('0', '2.2.15', '30.el6.centos') nss-softokn-3.14.3-10.el6_5.x86_64 is a duplicate with nss-softokn-3.14.3-9.el6.x86_64 openssl-1.0.1e-16.el6_5.7.x86_64 is a duplicate with openssl-1.0.1e-16.el6_5.4.x86_64 openssl-1.0.1e-16.el6_5.14.x86_64 is a duplicate with openssl-1.0.1e-16.el6_5.7.x86_64 openssl-devel-1.0.1e-16.el6_5.14.x86_64 is a duplicate with openssl-devel-1.0.1e-16.el6_5.7.x86_64 selinux-policy-3.7.19-231.el6_5.3.noarch is a duplicate with selinux-policy-3.7.19-231.el6_5.1.noarch tzdata-2014d-1.el6.noarch is a duplicate with tzdata-2014b-1.el6.noarch util-linux-ng-2.17.2-12.14.el6_5.x86_64 is a duplicate with util-linux-ng-2.17.2-12.14.el6.x86_64
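    Not an authoritative fix, but the "is a duplicate with" lines in the yum check output above are exactly what package-cleanup (from yum-utils, already installed here) is meant to handle; a sketch of the typical next step, ideally after a backup or snapshot:

        # list the duplicate package pairs left behind by the interrupted transactions
        package-cleanup --dupes
        # remove the older half of each duplicate pair
        package-cleanup --cleandupes
        # then retry the update
        yum update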

    Read the article

  • Installing Enterprise Library 5.0 - Enterprise Library 5.0 Tutorial Part 1

    Microsoft released Enterprise Library 5.0 in April 2010. It's free; you can download and install it from "Download Enterprise Library". You can also find the older version, Enterprise Library 4.1, if your project still needs it for maintenance purposes, but I suggest going for 5.0 as it has great enhancements and an improved UI configuration tool. Will it work with Visual Studio 2008? Yes, it also works with .NET 3.5 and Visual Studio 2008, so you can take advantage of the new improved UI configuration tool that comes with Enterprise Library 5.0 in VS2008. Please find the Enterprise Library resources; I suggest installing it with the documentation and hands-on labs. You can also find community links there. I'll see you in my next blog series, where I provide an introduction to the various blocks of Enterprise Library 5.0.

    Read the article

< Previous Page | 367 368 369 370 371 372 373 374 375 376 377 378  | Next Page >