Search Results

Search found 80596 results on 3224 pages for 'simplexml load file'.


  • Webfarm and IIS configuration tips/tricks

    - by steve schofield
    I was recently talking with some good friends about performance tips and what an IIS administrator could do on the server side. I also see this question from time to time in the forums @ http://forums.iis.net. Of course, you should test individual settings in a controlled environment, with load testing, before just implementing them on your production farm. Enable IIS compression (both static and dynamic if possible, and set it to 9); if you are running IIS 6, check out this article by Scott Forsyth. Run FRT (Failed Request Tracing) for long-running pages. Use SQL connection pooling in code. Look at using PAL with performance counters ( http://blogs.iis.net/ganekar/archive/2009/08/12/pal-performance-analyzer-with-iis.aspx ). Look at load testing using the Visual Studio load testing tools. Use Log Parser to find long-running pages; a sample query is sketched below. Look at CPU, memory and disk counters, and make sure the server has enough resources. Use the same machineKey across all nodes. Consider localized content vs. UNC-based content on a single server (my UNC tag has posts on this). Set content expiration. Keep ETags the same across all web farm nodes. Disable the Scalable Networking Pack. Use YSlow or the developer tools in Chrome to help measure improvements to the client experience. Additionally, some basic counters for measuring applications are Concurrent Connections, Requests / Sec and Requests Queued. I would also recommend checking out Chapter 17 in the IIS 7 Resource Kit – it was one of the chapters I authored. :) I strongly suggest testing one change at a time to see how it helps improve your performance. Hopefully this post provides a few options to review in your environment. Cheers, Steve Schofield, Microsoft MVP - IIS
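    P.S. Here is a rough sketch of the kind of Log Parser query I mean (it assumes W3C logs with the time-taken field enabled, and the log path is only an example – adjust it for your site ID):

LogParser.exe -i:W3C "SELECT TOP 25 cs-uri-stem, COUNT(*) AS Hits, AVG(time-taken) AS AvgMs FROM C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log GROUP BY cs-uri-stem ORDER BY AVG(time-taken) DESC"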

    Read the article

  • Make a Drive Image Using an Ubuntu Live CD

    - by Trevor Bekolay
    Cloning a hard drive is useful, but what if you have to make several copies, or you just want to make a complete backup of a hard drive? Drive images let you put everything, and we mean everything, from your hard drive in one big file. With an Ubuntu Live CD, this is a simple process – the versatile tool dd can do this for us right out of the box. We’ve used dd to clone a hard drive before. Making a drive image is very similar, except instead of copying data from one hard drive to another, we copy from a hard drive to a file. Drive images are more flexible, as you can do what you please with the data once you’ve pulled it off the source drive. Your drive image is going to be a big file, depending on the size of your source drive – dd will copy every bit of it, even if there’s only one tiny file stored on the whole hard drive. So, to start, make sure you have a device connected to your computer that will be large enough to hold the drive image. Some ideas for places to store the drive image, and how to connect to them in an Ubuntu Live CD, can be found at this previous Live CD article. In this article, we’re going to make an image of a 1GB drive, and store it on another hard drive in the same PC. Note: always be cautious when using dd, as it’s very easy to completely wipe out a drive, as we will show later in this article. Creating a Drive Image Boot up into the Ubuntu Live CD environment. Since we’re going to store the drive image on a local hard drive, we first have to mount it. Click on Places and then the location that you want to store the image on – in our case, a 136GB internal drive. Open a terminal window (Applications > Accessories > Terminal) and navigate to the newly mounted drive. All mounted drives should be in /media, so we’ll use the command cd /media and then type the first few letters of our difficult-to-type drive, press tab to auto-complete the name, and switch to that directory. If you wish to place the drive image in a specific folder, then navigate to it now. We’ll just place our drive image in the root of our mounted drive. The next step is to determine the identifier for the drive you want to make an image of. In the terminal window, type in the command sudo fdisk -l Our 1GB drive is /dev/sda, so we make a note of that. Now we’ll use dd to make the image. The invocation is sudo dd if=/dev/sda of=./OldHD.img This means that we want to copy from the input file (“if”) /dev/sda (our source drive) to the output file (“of”) OldHD.img, which is located in the current working directory (that’s the “.” portion of the “of” string). It takes some time, but our image has been created… Let’s test to make sure it works. Drive Image Testing: Wiping the Drive Another interesting thing that dd can do is totally wipe out the data on a drive (a process we’ve covered before). The command for that is sudo dd if=/dev/urandom of=/dev/sda This takes some random data as input, and outputs it to our drive, /dev/sda. If we examine the drive now using sudo fdisk –l, we can see that the drive is, indeed, wiped. Drive Image Testing: Restoring the Drive Image We can restore our drive image with a call to dd that’s very similar to how we created the image. The only difference is that the image is now our input file, and the drive is our output file. The exact invocation is sudo dd if=./OldHD.img of=/dev/sda It takes a while, but when it’s finished, we can confirm with sudo fdisk –l that our drive is back to the way it used to be!
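    One optional tweak, sketched here rather than taken from the steps above: you can compress the image as dd creates it, which saves a lot of space if the source drive is mostly empty (this assumes the same /dev/sda source and the current directory as the destination):

sudo dd if=/dev/sda | gzip > ./OldHD.img.gz        # create a compressed image
gunzip -c ./OldHD.img.gz | sudo dd of=/dev/sda     # restore it later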
Conclusion There are lots of reasons to create a drive image, with backup being the most obvious. Fortunately, with dd, creating a drive image only takes one line in a terminal window – if you’ve got an Ubuntu Live CD handy!

    Read the article

  • CodePlex Daily Summary for Friday, February 11, 2011

    CodePlex Daily Summary for Friday, February 11, 2011Popular ReleasesSnoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser. Hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! p.s. By request, I am also attaching a .zip file ... so that people can install it ...RIBA - Rich Internet Business Application for Silverlight: Preview of MVVM Framework Source + Tutorials: This is a first public release of the MVVM Framework which is part of the final RIBA application. The complete RIBA example LOB application has yet to be published. Further Documentation on the MVVM part can be found on the Blog, http://www.SilverlightBlog.Net and in the downloadable source ( mvvm/doc/ ). Please post all issues and suggestions in the issue tracker.SharePoint Learning Kit: 1.5: SharePoint Learning Kit 1.5 has the following new functionality: *Support for SharePoint 2010 *E-Learning Actions can be localised *Two New Document Library Edit Options *Automatically add the Assignment List Web Part to the Web Part Gallery *Various Bug Fixes for the Drop Box There are 2 downloads for this release SLK-1.5-2010.zip for SharePoint 2010 SLK-1.5-2007.zip for SharePoint 2007 (WSS3 & MOSS 2007)Facebook C# SDK: 5.0.3 (BETA): This is fourth BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. For more information about this release see the following blog posts: Facebook C# SDK - Writing your first Facebook Application Facebook C# SDK v5 Beta Internals Facebook C# SDK V5.0.0 (BETA) Released We have spend time trying ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.161: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a new Twitter List network importer, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file...WatchersNET.TagCloud: WatchersNET.TagCloud 01.09.03: Whats NewAdded New Skin TagTastic http://www.watchersnet.de/Portals/0/screenshots/dnn/TagCloud-TagTastic-Skin.jpg Added New Skin RoundedButton http://www.watchersnet.de/Portals/0/screenshots/dnn/TagCloud-RoundedButton-Skin.jpg changes Tag Count fixed on Tag Source Referrals Fixed Tag Count when multiple Tag Sources are usedExtremeML: ExtremeML v1.0 Beta 3: VS solution source code updated for compatibility with VS2010 (accommodates VS2010 breaking changes in T4 template support).Finestra Virtual Desktops: 1.1: This release adds a few more performance and graphical enhancements to 1.0. Switching desktops is now about as fast as you can blink. 
Desktop switching optimizations New welcome wizard for Vista/7 Fixed a few minor bugs Added a few more options to the options dialog (including ability to disable the taskbar switching)WCF Data Services Toolkit: WCF Data Services Toolkit: The source code and binary releases of the WCF Data Services Toolkit. For simplicity, the source code download doesn't include any of the MSTest files. If you want those, you can pull the code down via MercurialyoutubeFisher: youtubeFisher 3.0 [beta]: What's new: Video capturing improved Supports YouTube's new layout (january 2011) Internal refactoringNearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3. All views migrated to Razor for cleaner markup. Alternate template (Layout file) for mobile devices 4 Bug Fixes since Version 4.1 Visit the project Roadmap for more details. Webdeploy package sha1 checksum: 28785b7248052465ea0738a7775e8e8744d84c27fuv: 1.0 release, codename Chopper Joe: features: search/replace :o to open file :s to save file :q to quitASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager html generation optimized new features for the lookup (add additional search data ) live demo went aeroEnhSim: EnhSim 2.3.6 BETA: 2.3.6 BETAThis release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 ...TestApi - a library of Test APIs: TestApi v0.6: TestApi v0.6 comes with the following changes: TestApi code development has been moved to Codeplex: Moved TestApi soluton to VS 2010; Moved all source code to Codeplex. All development work is done there now. Fault Injection API: Integrated the unmanaged FaultInjectionEngine.dll COM component in the build; Cleaned up FaultInjectionEngine.dll to build at warning level 4; Implemented “FaultScope” which allows for in-process fault injection; Added automation scripts & sample program; ...AutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! News page url's are now opened in the default browser Added a context menu to the system tray icon (thanks to Alex Banagos) AutoChat now allows configuring the Chat Keys and the Modifier Key The recent files list now supports compact and full mode Fix: Swapped mouse buttons are now properly detected Fix: Sometimes the Play button was pressed while still greyed out Champion: Karma Note: You can also run the u...mojoPortal: 2.3.6.2: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. 
To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here. Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: - problems that blocked Gem installation in certain cases - regex syntax: the parser was replaced with a new one that is much more compatible with Ruby 1.9.2 - cras...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (4): There was a small issue with the previous release that caused errors when installing the templates in VS10 Express. This release corrects the error. Only use this if you encountered issues when installing the previous release. No changes in the binaries. New Projects: AlchemySearch: A middleware application that allows the searching of Open Text's Alchemy application from a command line or from a calling application. C# .NET 4. alltuan: all tuan info. asp.net mvc 2.0 ??????: asp.net mvc 2.0 ??????. Basic Text File Generator: This is an investigatory development that creates text files from a windows forms app. bisolu_luc: nothing yet. Criteria Pack for EPiServer CMS: This is a collection of useful criteria for extending the personalization mechanism, Visitor Groups, in EPiServer CMS 6 R2 (or later). CTRNN.NET: CTRNN.NET is a C# implementation of Continuous Time Recurrent Neural Networks. DXperience Toolkit: Complimentary controls and libraries from DevExpress. Excel Library for Small Basic: This library extends Small Basic to allow you to read or write the contents of a cell in an Excel file from a Small Basic program. HealthVault Eventing Sample: A sample application demonstrating the new HealthVault Eventing feature. Hitcents Blog - Example Code: Contains various source code examples from articles at http://www.hitcents.com/blog. Brought to you by Hitcents. INI Modifier: A proprietary application to amend users' ini files for JBAe automatically from a central control file. issueIT.net - ”I just found the last bug”: issueIT is the best open source web-based issue tracker for .NET. It’s easy to customize to suit your organization. This project is not finished yet - it is only at a very early stage. Leage of Legends Masteries Tool: -Small, portable and fast tool to help you set your masteries. -No Install. -Masteries saved in the same folder: "lolMasterSet.dat" (text file) How to use: - Run and then go to the masteries tab in the League of Legends launcher. You should see the overlay. Memory View controls: Memory View controls are a pair of controls for real time and static viewing and editing of a memory space.
MemoryHexEditor is a hex editor that supports viewing and editing memory, and MemoryHeatMap is a time-based graphical view that shows how memory is read and written to. Moq Contrib: Community contributions to Moq. Operating System Basic: An operating system developed in QuickBasic during my adolescence. PAiRS - A WPF Memory Card Game: PAiRS is an implementation of a card matching game in which you are given an even number of cards face down in a grid, and you try to flip over 2 cards at a time to create a match until all cards are matched. PAiRS is built using C#, .Net 3.5+ and WPF. RIBA - Rich Internet Business Application for Silverlight: RIBA is an example LOB application with many features to show how to build distributed and secure Line of Business applications based on Silverlight 4. You can find the official blog on www.silverlightblog.net. Searcher: Searcher is a Windows Phone project which allows you to quickly search using multiple search engines. SharePoint Term Store Powershell Utilities: An assortment of PowerShell script-based utilities to perform actions on the SharePoint Managed Metadata Term Store. PowerShell script-based utilities have been chosen rather than cmdlets as they are more suitable for environments that have strict installation policies. SmartHotelFoundation: Smart Hotel Foundation. Sumer Filemanager: This project provides fully Ajax-based file management. The Open Source PMU: The Open Source PMU Project provides resources that enable you to build your own SynchroPhasor sensor for use with the openPDC project, research, development, or electric grid observation. Trail Blazer: Movie Trailer Fetcher - downloads trailers for movies in a collection and saves them in the movie folder. It is being developed as a companion to Meta Data Fetchers for the Media Browser project. It is being developed as a standalone C# app but a frontend in MCML is being considered. University Project Tracker: University Project Tracker. Video Backup Fusion: The complete and simple to use backup solution for video portals. WPF ShellFactory: WPF ShellFactory is an easy-to-use application framework for WPF, using a system derived from M-V-VM for its views. It supports multiple modules, either statically or dynamically loaded (i.e. plugins). It also contains mechanisms for application-wide services.

    Read the article

  • Cannot install any software from the Software Center due to ttf-mscorefonts-installer package error

    - by Dei
    When I try to install any software from ubuntu software center it comes with error: An unhandled error occured There seems to be a programming error in aptdaemon. This is the software that allows you to install/remove software and to perform other package management related tasks. details Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 961, in simulate trans.unauthenticated = self._simulate_helper(trans) File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1085, in _simulate_helper return depends, self._cache.required_download, \ File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 226, in required_download pm.get_archives(fetcher, self._list, self._records) SystemError: E:I wasn't able to locate file for the ttf-mscorefonts-installer package. This might mean you need to manually fix this package. Please help me!

    Read the article

  • Upgrading VSIX extensions from VS2012 to VS2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/27/upgrading-vsix-extensions-from-vs2012-to-vs2013.aspx As consumers of your Visual Studio extensions start to move over to VS 2013, you will have to upgrade the Visual Studio extensions you build for Visual Studio 2012 to Visual Studio 2013 and republish them to the Visual Studio extension gallery; otherwise, it will not be possible for your consumers to install and use your extensions on Visual Studio 2013. Objective In this blog post, I’ll show you how simple it is to upgrade your Visual Studio 2012 extension to Visual Studio 2013. There aren’t any reported breaking changes between the VS 2012 SDK and the VS 2013 SDK; the upgrade usually involves rebuilding the extension against the VS 2013 SDK and updating the VSIX manifest file. Walkthrough 1. Download the Visual Studio 2013 SDK - you will need the Visual Studio 2013 SDK in order to open up the Visual Studio extension project in Visual Studio 2013. The SDK can be downloaded from here. Install the SDK before you proceed. 2. Once the VS 2013 SDK has been installed, open up your package project. For the purposes of this blog post, I’ll open up the Avanade Extension – Software Inventory in Visual Studio 2013. You will notice that Visual Studio doesn’t load the project but lets you know that the project needs to be migrated. 3. Right click the project and choose the option ‘Reload Project’ from the Context Menu. 4. Choosing the Reload Project option brings up an upgrade window, telling you that the upgrade is a one-way upgrade, i.e. the project will be changed to work with Visual Studio 2013 and you will not be able to open the project up in Visual Studio 2012. My recommendation would be to create a Visual Studio 2013 branch and upgrade the project in that branch only, so if you need to go back to the Visual Studio 2012 project at some point, you have a handy reference in a separate branch. 5. Upon clicking OK, the project is updated. The following changes are made at the time of upgrade: - The runtime version is updated in the Resources.Designer.cs file. - The minimum version of Visual Studio in the package project file is changed from 11.0 to 12.0. 6. Reference the VS 2013 DLLs rather than the VS 2012 DLLs. So reference Microsoft.TeamFoundation.Client.dll and Microsoft.TeamFoundation.Controls.dll from C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0 and C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v4.5. If you have any other API references, then change the references to point to VS 2013 instead of VS 2012. 7. Rebuild your solution to ensure there are no breaking changes. Success! 8. Update the VSIX manifest file (the file source.extension.vsixmanifest contains the metadata for your VSIX). - Update the Install Targets from 11.0 to 12.0. This enforces that the extension can be installed on the Visual Studio 2013 version of Visual Studio. - Update the Dependencies from Visual Studio MPF 11.0 to Visual Studio MPF 12.0. 9. Rebuild the solution, open up the bin folder for the Package project and look for the *.vsix file [Microsoft Visual Studio Extension]. - This is basically the installer for your extension.
- Double-click the installer to launch the installer wizard. Voila! You can see the package installation wizard opens up and gives you the option to install the extension for Visual Studio 2013. - Click Install to continue. - Note – If you run into the exception “23/06/2013 10:42:18 - Install Error : Microsoft.VisualStudio.ExtensionManager.InstallByMsiException: The InstalledByMSI element in extension Avanade Extensions cannot be 'true' when installing an extension through the Extensions and Updates Installer. The element can only be 'true' when an MSI lays down the extension manifest file.” ensure you have the option “This VSIX is installed by Windows Installer” unchecked in the Install Targets tab. 10. Verifying that the extension has installed correctly. - Open Extension Manager and verify that the installed extension shows up in the extension manager’s list of installed VSIXs. 11. First look at the updated extension. - The links have now been moved to the context menu, so to see the navigation links, you’ll have to right click on the icon and select the option from the context menu. Note – The Avanade Extension used in the demo has been developed by Utkarsh and Tarun. The Software Inventory Extension for Visual Studio 2012 allows you to see the list of software installed on the hosted build server right from within Visual Studio; the extension also allows you to export this list to Excel. More details on how this has been implemented can be found here. I hope you found this useful. In case you have any questions or feedback, feel free to reach out on the Visual Studio extensibility MSDN forums or via the Microsoft Visual Studio feedback forum. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
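Going back to step 8, the relevant fragment of source.extension.vsixmanifest ends up looking roughly like this after the upgrade (a sketch only – the target and dependency Ids shown are common stock values, and your own manifest will carry its own identity and display metadata):

<Installation>
  <InstallationTarget Id="Microsoft.VisualStudio.Pro" Version="[12.0,13.0)" />
</Installation>
<Dependencies>
  <Dependency Id="Microsoft.Framework.NDP" DisplayName="Microsoft .NET Framework" Version="4.5" />
  <Dependency Id="Microsoft.VisualStudio.MPF.12.0" DisplayName="Visual Studio MPF 12.0" Version="[12.0]" />
</Dependencies>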

    Read the article

  • Observations in Migrating from JavaFX Script to JavaFX 2.0

    - by user12608080
    Observations in Migrating from JavaFX Script to JavaFX 2.0 Introduction Having been available for a few years now, there is a decent body of work written for JavaFX using the JavaFX Script language. With the general availability announcement of JavaFX 2.0 Beta, the natural question arises about converting the legacy code over to the new JavaFX 2.0 platform. This article reflects on some of the observations encountered while porting source code over from JavaFX Script to the new JavaFX API paradigm. The Application The program chosen for migration is an implementation of the Sudoku game and serves as a reference application for the book JavaFX – Developing Rich Internet Applications. The design of the program can be divided into two major components: (1) A user interface (ideally suited for JavaFX design) and (2) the puzzle generator. For the context of this article, our primary interest lies in the user interface. The puzzle generator code was lifted from a sourceforge.net project and is written entirely in Java. Regardless which version of the UI we choose (JavaFX Script vs. JavaFX 2.0), no code changes were required for the puzzle generator code. The original user interface for the JavaFX Sudoku application was written exclusively in JavaFX Script, and as such is a suitable candidate to convert over to the new JavaFX 2.0 model. However, a few notable points are worth mentioning about this program. First off, it was written in the JavaFX 1.1 timeframe, where certain capabilities of the JavaFX framework were as of yet unavailable. Citing two examples, this program creates many of its own UI controls from scratch because the built-in controls were yet to be introduced. In addition, layout of graphical nodes is done in a very manual manner, again because much of the automatic layout capabilities were in flux at the time. It is worth considering that this program was written at a time when most of us were just coming up to speed on this technology. One would think that having the opportunity to recreate this application anew, it would look a lot different from the current version. Comparing the Size of the Source Code An attempt was made to convert each of the original UI JavaFX Script source files (suffixed with .fx) over to a Java counterpart. Due to language feature differences, there are a small number of source files which only exist in one version or the other. The table below summarizes the size of each of the source files. JavaFX Script source file Number of Lines Number of Character JavaFX 2.0 Java source file Number of Lines Number of Characters ArrowKey.java 6 72 Board.fx 221 6831 Board.java 205 6508 BoardNode.fx 446 16054 BoardNode.java 723 29356 ChooseNumberNode.fx 168 5267 ChooseNumberNode.java 302 10235 CloseButtonNode.fx 115 3408 CloseButton.java 99 2883 ParentWithKeyTraversal.java 111 3276 FunctionPtr.java 6 80 Globals.java 20 554 Grouping.fx 8 140 HowToPlayNode.fx 121 3632 HowToPlayNode.java 136 4849 IconButtonNode.fx 196 5748 IconButtonNode.java 183 5865 Main.fx 98 3466 Main.java 64 2118 SliderNode.fx 288 10349 SliderNode.java 350 13048 Space.fx 78 1696 Space.java 106 2095 SpaceNode.fx 227 6703 SpaceNode.java 220 6861 TraversalHelper.fx 111 3095 Total 2,077 79,127 2531 87,800 A few notes about this table are in order: The number of lines in each file was determined by running the Unix ‘wc –l’ command over each file. The number of characters in each file was determined by running the Unix ‘ls –l’ command over each file. 
The examination of the code could certainly be much more rigorous. No standard formatting was performed on these files. All comments however were deleted. There was a certain expectation that the new Java version would require more lines of code than the original JavaFX Script version. As evidenced by a count of the total number of lines, the Java version has about 22% more lines than its FX Script counterpart. Furthermore, there was an additional expectation that the Java version would be more verbose in terms of the total number of characters. In fact the preceding data shows that on average the Java source files contain fewer characters per line than the FX files. But that's not the whole story. Upon further examination, the FX Script source files had a disproportionate number of blank characters. Why? Because of the nature of how one develops JavaFX Script code. The object literal dominates FX Script code. It's not uncommon to see object literals indented halfway across the page, consuming lots of meaningless space characters. RAM consumption Though not the most scientific analysis, memory usage for the application was examined on a Windows Vista system by running the Windows Task Manager and viewing how much memory was being consumed by the Sudoku version in question. Roughly speaking, the FX Script version, after startup, had a RAM footprint of about 90MB and remained pretty much the same size. The Java version started out at about 55MB and maintained that size throughout its execution. What About Binding? Arguably, the most striking observation about the conversion from JavaFX Script to JavaFX 2.0 concerned the need for data synchronization, or lack thereof. In JavaFX Script, the primary means to synchronize data is via the bind expression (using the “bind” keyword), and perhaps to a lesser extent its “on replace” cousin. The bind keyword does not exist in Java, so for JavaFX 2.0 a Data Binding API has been introduced as a replacement. To give a feel for the difference between the two versions of the Sudoku program, the table that follows indicates how many binds were required for each source file. For JavaFX Script files, this was ascertained by simply counting the number of occurrences of the bind keyword. As can be seen, binding had been used frequently in the JavaFX Script version (and this does not take into consideration an additional half dozen or so “on replace” triggers). The JavaFX 2.0 program achieves the same functionality as the original JavaFX Script version, yet the equivalent of binding was only needed twice throughout the Java version of the source code. JavaFX Script source file Number of Binds JavaFX Next Java source file Number of “Binds” ArrowKey.java 0 Board.fx 1 Board.java 0 BoardNode.fx 7 BoardNode.java 0 ChooseNumberNode.fx 11 ChooseNumberNode.java 0 CloseButtonNode.fx 6 CloseButton.java 0 CustomNodeWithKeyTraversal.java 0 FunctionPtr.java 0 Globals.java 0 Grouping.fx 0 HowToPlayNode.fx 7 HowToPlayNode.java 0 IconButtonNode.fx 9 IconButtonNode.java 0 Main.fx 1 Main.java 0 Main_Mobile.fx 1 SliderNode.fx 6 SliderNode.java 1 Space.fx 0 Space.java 0 SpaceNode.fx 9 SpaceNode.java 1 TraversalHelper.fx 0 Total 58 2 Conclusions As the JavaFX 2.0 technology is so new, and experience with the platform equally limited, it is possible and indeed probable that some of the observations noted in the preceding article may not apply across other attempts at migrating applications.
That being said, this first experience indicates that the migrated Java code will likely be larger, though not extensively so, than the original JavaFX Script source. Furthermore, although very important, it appears that the requirements for data synchronization via binding may be significantly less with the new platform.
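To give a concrete flavour of the Data Binding API mentioned above, here is a minimal, hypothetical sketch (not taken from the Sudoku sources) of what replaces a JavaFX Script bind expression in JavaFX 2.0:

import javafx.beans.property.SimpleStringProperty;
import javafx.beans.property.StringProperty;
import javafx.scene.control.Label;

// JavaFX Script:  Label { text: bind title }
StringProperty title = new SimpleStringProperty("Sudoku");
Label label = new Label();
label.textProperty().bind(title);   // one-way binding via the Data Binding API
title.set("Sudoku - solved!");      // the label text updates automatically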

    Read the article

  • Why can't i change the permissions of files I have access to?

    - by Erik
    I'm logged into a server as user "ubuntu" and I've got files that look like this: -rw-rw-r-- 1 www-data www-data 33150 2012-06-04 22:17 file-a.png -rw-rw-r-- 1 www-data www-data 36371 2012-06-04 22:15 file-b.png -rw-rw-r-- 1 www-data www-data 41439 2012-06-04 22:16 file-c.png The ubuntu user is a member of the group www-data: > groups ubuntu ubuntu : ubuntu www-data So shouldn't I be able to change the permissions, since I have access to the files? I'm not an expert on the user/group stuff ... so this is just perplexing me. I'm trying to run: > chmod o-r * I realize I can do it with sudo, easily, but I'm trying to understand why I can't modify the files without sudo. Thanks for any help!

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful. About SQL Server disaster recovery Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes will most probably require changing and tweaking the recovery plan. What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until next Friday, when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans. Why are transaction logs important? Transaction log backups contain transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll back or forward the database to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing. These are the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases, it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point.
During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them. How to read a SQL Server transaction log? SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge, return a large number of not easy to read and understand columns, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match it to a value in the MDF file. When you finally read the transactions executed, you have to create a script for it. How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log It will automatically detect all local servers. If not, click the icon right to the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add.   To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read so if there were any schema changes, such as a new function created, or an existing table modified, they will be ignored and database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options. 
Select Open results in grid, to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu.   For a large number of transactions and in a critical situation, when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be directly scripted into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script , change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary: The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script using an integrated developer environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
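As an aside to the fn_dblog mention above, the undocumented call looks roughly like this (a sketch only – the function and its columns are unsupported and can change between SQL Server versions, so treat the output with care):

-- dump the active portion of the online transaction log for the current database
SELECT [Current LSN], [Transaction ID], Operation, Context, AllocUnitName
FROM fn_dblog(NULL, NULL);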

    Read the article

  • Configure clean URLs using Laravel using a rewrite rule to index.php

    - by yannis hristofakis
    Recently I've started learning Laravel; I have no prior experience with frameworks. I'm encountering the following problem. I'm trying to configure the .htaccess file so I can have clean URLs, but the only thing I get are 404 Not Found error pages. I have created a virtual host - you can see the configuration file below - and changed the .htaccess file in the public directory. /etc/apache2/sites-available <VirtualHost *:80> ServerAdmin [email protected] ServerName laravel.lar DocumentRoot "/home/giannis/Desktop/laravel/public" <Directory "/home/giannis/Desktop/laravel/public"> Options Indexes FollowSymLinks MultiViews AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> .htaccess file: laravel/public # Apache configuration file # http://httpd.apache.org/docs/current/mod/quickreference.html # Note: ".htaccess" files are an overhead for each request. This logic should # be placed in your Apache config whenever possible. # http://httpd.apache.org/docs/current/howto/htaccess.html # Turning on the rewrite engine is necessary for the following rules and # features. "+FollowSymLinks" must be enabled for this to work symbolically. <IfModule mod_rewrite.c> Options +FollowSymLinks RewriteEngine On </IfModule> # For all files not found in the file system, reroute the request to the # "index.php" front controller, keeping the query string intact <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] </IfModule> In order to test it, I have created a view named about and made the proper routing. If I link to http://laravel.lar/index.php/about/ I'm routed to the about page, but if I link to http://laravel.lar/about/ I get a 404 Not Found error. I'm using a Debian-based system.

    Read the article

  • How to Restore the Real Internet Explorer Desktop Icon in Windows 7

    - by The Geek
    Remember how previous versions of Windows had an Internet Explorer icon on the desktop, and you could right-click it to quickly access the Internet Options screen? It’s completely gone in Windows 7, but a geeky hack can bring it back. Microsoft removed this feature to comply with all those murky legal battles they’ve had, and their alternate suggestion is to create a standard shortcut to iexplore.exe on the Desktop, but it’s not the same thing. We’ve got a registry hack to bring it back. This guest article was written by Ramesh from the WinHelpOnline blog, where he’s got loads of really geeky registry hacks. Bring Back the Internet Explorer Namespace Icon in Windows 7 the Easy Way If you just want the IE icon back, all you need to do is download the RealInternetExplorerIcon.zip file, extract the contents, and then double-click on the w7_ie_icon_restore.reg file. That’s all you have to do. There’s also an undo registry file there if you want to get rid of it. Download the Real Internet Explorer Icon Registry Hack Manual Registry Hack If you prefer doing things the manual way, or just really want to understand how this hack works, you can follow through the manual steps below to learn how it was done, but we’ll have to warn you that it’s a lot of steps. Launch Regedit.exe using the Start Menu search box, and then navigate to the following location: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30309D} Right-click on the key on the left-hand pane, choose Export, and save it to a .REG file (say, ie-guid.reg) Open up the REG file using Notepad… From the Edit menu, click Replace, and replace every occurrence of the following GUID string {871C5380-42A0-1069-A2EA-08002B30309D} … with a custom GUID string, such as: {871C5380-42A0-1069-A2EA-08002B30301D} Save the REG file and close Notepad, and then double-click on the file to merge the contents to the registry. Either re-open the registry editor, or use the F5 key to reload everything with the new changes (this step is important). Now you can navigate down to the following registry key: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30301D} \ Shellex \ ContextMenuHandlers \ ieframe Double-click on the (default) key in the right-hand pane and set its data as: {871C5380-42A0-1069-A2EA-08002B30309D} With this done, press F5 on the desktop and you’ll see the Internet Explorer icon that looks like this: The icon appears incomplete without the Properties command in the right-click menu, so keep reading. Final Registry Hack Adjustments Click on the following key, which should still be viewable in your Registry editor window from the last step. HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D} Double-click LocalizedString in the right-hand pane and type the following data to rename the icon.
Internet Explorer Select the following key: HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}\shell Add a subkey and name it Properties, then select the Properties key, double-click the (default) value and type the following: P&roperties Create a String value named Position, and type the following data: bottom At this point the window should look something like this: Under Properties, create a subkey and name it Command, and then set its (default) value as follows: control.exe inetcpl.cpl Navigate down to the following key, and then delete the value named LegacyDisable HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30301D} \ shell \ OpenHomePage Now head to this key: HKEY_LOCAL_MACHINE \ SOFTWARE \ Microsoft \ Windows \ CurrentVersion \ Explorer \ Desktop \ NameSpace Create a subkey named {871C5380-42A0-1069-A2EA-08002B30301D} (which is the custom GUID that we used earlier in this article.) Press F5 to refresh the Desktop, and here is how the Internet Explorer icon finally looks. That’s it! It only took 24 steps, but you made it through to the end – of course, you could just download the registry hack and get the icon back with a double-click.
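For reference, the manual steps above add up to a .reg fragment along these lines (a sketch only – it assumes the same custom GUID {871C5380-42A0-1069-A2EA-08002B30301D} used in this article, and it covers just the Properties command and NameSpace entries, not the exported ie-guid.reg contents):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}\shell\Properties]
@="P&roperties"
"Position"="bottom"

[HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}\shell\Properties\Command]
@="control.exe inetcpl.cpl"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Desktop\NameSpace\{871C5380-42A0-1069-A2EA-08002B30301D}]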

    Read the article

  • A Rose by Any Other Name..

    - by Geoff N. Hiten
    It is always a good start when you can steal a title line from one of the best writers in the English language. Let’s hope I can make the rest of this post live up to the opening. One recurring problem with SQL Server is moving databases to new servers. Client applications use a variety of ways to resolve SQL Server names, some of which are not changed easily <cough SharePoint /cough>. If you happen to be using default instances on both the source and target SQL Server, then the solution is pretty simple. You create (or bug the network admin until she creates) two DNS “A” records. One points the old name to the new IP address. The other creates a new alias for the old server, since the original system name is now redirected. Note this will redirect ALL traffic from the old server to the new server, including RDP and file share connection attempts. Figure 1 – Microsoft DNS MMC Snap-In Figure 2 – DNS New Host Dialog Box Both records are necessary so you can still access the old server via an alternate name. Server Role IP Address Name Alias Source 10.97.230.60 SQL01 SQL01_Old Target 10.97.230.80 SQL02 SQL01 Table 1 – Alias List If you or somebody set up connections via IP address, you deserve to have to go to each app and fix it by hand. That is the only way to fix that particular foul-up. If you have to deal with Named Instances either as a source or a target, then it gets more complicated. The standard fix is to use the SQL Server Configuration Manager (or one of its earlier incarnations) to create a SQL client alias to redirect the connection. This can be a pain to install and configure on multiple client servers. The good news is that SQL Server Configuration Manager AND all of its earlier versions simply write a few registry keys. Extracting the keys into a .reg file makes centralized automated deployment a snap. If the client is a 32-bit system, you have to extract the native key. If it is 64-bit, you have to extract the native key and the WoW (32 bit on 64 bit host) key. First, pick a development system to create the actual registry key. If you do this repeatedly, you can simply edit an existing registry file. Create the entry using the SQL Configuration Manager. You must use a 64-bit system to create the WoW key. The following example redirects from a named instance “SQL01\SQLUtiluty” to a default instance on “SQL02”. Figure 3 – SQL Server Configuration Manager - Native Figure 3 shows the native key listing. Figure 4 – SQL Server Configuration Manager – WoW If you think you don’t need the WoW key because your app is 64-bit, think again. SQL Server Management Studio is a 32-bit app, as are most SQL test utilities. Always create both keys for 64-bit target systems. Now that the keys exist, we can extract them into a .reg file. Fire up REGEDIT and browse to the following location: HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo. You can also search the registry for the string value of one of the server names (old or new). Right click on the “ConnectTo” label and choose “Export”. Save with an appropriate name and location. The resulting file should look something like this: Figure 5 – SQL01_Alias.reg Repeat the process with the location: HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo Note that if you have multiple alias entries, ALL of the entries will be exported. In that case, you can edit the file and remove the extra aliases. You can edit the files together into a single file.
Just leave a blank line between new keys like this: Figure 6 – SQL01_Alias_All.reg Of course if you have an automatic way to deploy, it makes sense to have an automatic way to Un-deploy.  To delete a registry key, simply edit the .reg file and replace the target with a “-“ sign like so. Figure 7 – SQL01_Alias_UNDO.reg Now we have the ability to move any database to any server without having to install or change any applications on any client server.  The whole process should be transparent to the applications, which makes planning and coordinating database moves a far simpler task.
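For reference, the combined ConnectTo export (Figure 6) contains entries along these lines – a sketch based on the example alias above, where DBMSSOCN simply selects the TCP/IP network library and SQL02 is the target server:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo]
"SQL01\\SQLUtiluty"="DBMSSOCN,SQL02"

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo]
"SQL01\\SQLUtiluty"="DBMSSOCN,SQL02"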

    Read the article

  • How to install Revolution R Enterprise?

    - by Abe
    Revolution R Enterprise is available as a red-hat rpm file. Normally I would use alien to install an rpm file, but the instructions for installing this package have an install.py file that I am supposed to execute. When I ./install.py, I get the following instructions: rpm: please use alien to install rpm packages on Debian, if you are really sure use --force-debian switch. See README.Debian for more details. There is no README.Debian file in the directory, and although I am not proficient in python, I can tell that there are at least four different directories with *rpm files in them. Has anyone had success with this? If possible, I'd prefer to install the Enterprise version instead of community version in the Ubuntu repository so that I can test it out.

    Read the article

  • F# WPF Form &ndash; the basics

    - by MarkPearl
    I was listening to Dot Net Rocks show #560 about F# and during the podcast Richard Campbell brought up a good point with regards to F# and a GUI. In essence, what I understood his point to be was that until one could write an end to end application in F#, it would be a hard sell to developers to take it on. In part I agree with him: while I am beginning to really enjoy learning F#, I can’t help but feel that I would be a lot further into the language if I could do my Windows Forms like I do in C# or VB.NET, for the simple reason that in “playing” applications I spend the majority of the time in the UI layer… So I have been keeping my eye out for some examples of creating a WPF form in an F# project and came across Tim’s F# Twitter Stream Sample, which had exactly this… of course he actually had a bit more than a basic form, but it was enough for me to scrap the insides and glean what I needed. So today I am going to make just the very basic WPF form with all the goodness of a XAML window. Getting Started The first thing we need to do is create a new solution with a blank F# application project – I have called mine FSharpWPF. Once you have the project created you will need to change the project type from a Console Application to a Windows Application. You do this by right clicking on the project file and going to its properties… Once that is done you will need to add the appropriate references. You do this by right clicking on References in the Solution Explorer and clicking “Add Reference”. You should add the appropriate .NET references below for WPF & XAML to work. Once these references are added you then need to add your XAML file to the project. You can do this by adding a new item to the project of type xml and simply changing the file extension from xml to xaml. Once the xaml file has been added to the project you will need to add valid window XAML. An example of a very basic xaml file is shown below… <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="F# WPF Form" Height="350" Width="525"> <Grid> </Grid> </Window> Once your xaml file is done… you need to set the build action of the xaml file from “None” to “Resource” as depicted in the picture below. If you do not set this you will get an IOException error when running the completed project with a message along the lines of “Cannot locate resource ‘window.xaml’” You then need to tie everything up by putting the correct F# code in Program.fs to load the xaml window. In Program.fs put the following code… module Program open System open System.Collections.ObjectModel open System.IO open System.Windows open System.Windows.Controls open System.Windows.Markup [<STAThread>] [<EntryPoint>] let main(_) = let w = Application.LoadComponent(new System.Uri("/FSharpWPF;component/Window.xaml", System.UriKind.Relative)) :?> Window (new Application()).Run(w) Once all this is done you should be able to build and run your project. What you have done is created a WPF-based window inside an F# project. It should look something like below… Nothing too exciting, but sufficient to illustrate the very basic WPF form in F#. Hopefully in future posts I will build on this to expose button events etc.
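    As a small taste of that, here is a hypothetical sketch of wiring up a button (it assumes the Grid in the XAML above contains something like <Button x:Name="HelloButton" Content="Say Hello" />; the names are made up and not part of the original sample):

let w = Application.LoadComponent(
            new System.Uri("/FSharpWPF;component/Window.xaml", System.UriKind.Relative)) :?> Window
// find the button declared in the XAML and attach a click handler
let button = w.FindName("HelloButton") :?> Button
button.Click.Add(fun _ -> MessageBox.Show("Hello from F#") |> ignore)
(new Application()).Run(w)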

    Read the article

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software, but make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small tools that handle builds not for software but for data? Something that regenerates targets not only on mod times but on certain other properties (e.g. completeness). (Or alternatively some paper that describes such a tool.) As an illustration, I'd like to automate the following process:
    - get data (e.g. a tarball) from some regularly updated source
    - copy it somewhere if it's not there already (based e.g. on some filename scheme)
    - convert the files to a different format (but only if there aren't successfully converted ones there already, e.g. from a previous attempt; this needs a custom comparison routine)
    - for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide on existence of the file and file "freshness")
    - finally compute something (e.g. a word count for something identifiable) and store it in the database, but only if the DB does not have an entry for that exact ID yet
    Observations:
    - there are different stages
    - each stage is usually simple to compute or implement in isolation
    - each stage may be simple, but the data volume may be large
    - each stage may produce a few errors
    - each stage may have different signals for when (re)processing is needed
    Requirements:
    - builds should be interruptible and idempotent (== robust); when interrupted, already processed objects should be reused to speed up the next run
    - data paths should be easy to adjust (simple syntax, nothing new to learn; an internal DSL would be ok)
    - some form of dependency graph that describes the process would be nice for later visualizations
    - it should leverage existing programs, if possible
    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and are an OK glue for a variety of other programs written in other languages), so I wonder if worse is better?
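    To make the idea concrete, here is a rough sketch of the kind of stage I mean (plain Python, not taken from any existing tool; the needs_rebuild predicate and the temp-file rename are my own assumptions about how a stage could stay idempotent):
    import os
    import tempfile

    def needs_rebuild(target, source, extra_check=None):
        """Pluggable rebuild test: missing target, stale mtime, or a custom predicate."""
        if not os.path.exists(target):
            return True
        if os.path.getmtime(target) < os.path.getmtime(source):
            return True
        return bool(extra_check and extra_check(target))

    def convert_stage(source, target):
        # Completeness check: a zero-byte target counts as "not successfully converted".
        if not needs_rebuild(target, source, extra_check=lambda t: os.path.getsize(t) == 0):
            return  # already done, so an interrupted run resumes cheaply
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(target) or ".")
        with os.fdopen(fd, "wb") as out, open(source, "rb") as src:
            out.write(src.read())      # stand-in for the real conversion
        os.rename(tmp_path, target)    # atomic rename: a crash never leaves a half-written target
    Each stage would supply its own extra_check (row counts, an ID lookup in the database, and so on), which is exactly the part that make's mtime-only rule cannot express.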

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects, called Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example will be provided to show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need. Then we take some time to examine all the objects. After that, we prepare the source data and deploy the target tables and the dimension/cube loading maps. Finally, we run the loading maps and check the data in the target dimension/cube tables. OK, let’s start…
    1. Import MDL file and examine sample project
    First, download the zip file from here, which includes an MDL file and three source data files. Then we open the OWB Design Center and import orphan_management.mdl using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in the BI_DEMO project, as below:
    Mapping LOAD_CHANNELS_OM: The mapping for dimension loading.
    Mapping LOAD_SALES_OM: The mapping for cube loading.
    Dimension CHANNELS_OM: The dimension that contains channels data.
    Cube SALES_OM: The cube that contains sales data.
    Table CHANNELS_OM: The star implementation table of dimension CHANNELS_OM.
    Table SALES_OM: The star implementation table of cube SALES_OM.
    Table SRC_CHANNELS: The source table of channels data that will be loaded into dimension CHANNELS_OM.
    Table SRC_ORDERS and SRC_ORDER_ITEMS: The source tables of sales data that will be loaded into cube SALES_OM.
    Sequence CLASS_OM_DIM_SEQ: The sequence used for loading dimension CHANNELS_OM.
    Dimension CHANNELS_OM
    This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots. The orphan management policy options that you can set for loading are:
    Reject Orphan: The record is not inserted.
    Default Parent: You can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates it. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates this record.
    No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.
    While removing data from a dimension, you can select one of the following orphan management policies:
    Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records.
    No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.
    (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1)
    Cube SALES_OM
    This cube references dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST.
    The orphan management policy settings are shown in the following screenshot. The orphan management policy options that you can set for loading are:
    No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows.
    Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row.
    Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record.
    (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG)
    Mapping LOAD_CHANNELS_OM
    This mapping loads source data from table SRC_CHANNELS into dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so that we can see what happens when channel records with an “invalid” parent are loaded into the dimension. Some properties of the dimension operator in this mapping are important to orphan management; see the screenshot below:
    Create Default Level Records: If YES, then default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. This property is set to NO by default, so the user may need to set it to YES manually.
    LOAD policy for INVALID keys / LOAD policy for NULL keys: These two properties have the same meaning as in the dimension editor. The values are set to match the dimension when the user drops the dimension into the mapping. The user does not need to modify these properties.
    Record Error Rows: If YES, error rows will be inserted into the error table when loading the dimension.
    REMOVE Orphan Policy: This property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled.
    Mapping LOAD_SALES_OM
    This mapping loads source data from tables SRC_ORDERS and SRC_ORDER_ITEMS into cube SALES_OM. This mapping seems a little complicated, but the operators in the red rectangle are used to filter out and generate the records with “invalid” or “null” dimension keys. Some properties of the cube operator in a mapping are important to orphan management; see the screenshot below:
    Enable Source Aggregation: Should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension refs are used as update match attributes for updating the fact table.
    LOAD policy for INVALID keys / LOAD policy for NULL keys: These two properties have the same meaning as in the cube editor. The values are set to the same as in the cube editor when the user drops the cube into the mapping. The user does not need to modify these properties.
    Record Error Rows: If YES, error rows will be inserted into the error table when loading the cube.
    2. Deploy objects and mappings
    We can now deploy the objects. First, make sure the location SALES_WH_LOCAL has been correctly configured.
    Then open the Control Center Manager using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL and click the SALES_WH node in the project tree. We can see the following objects. Deploy all the objects in the following order:
    Sequence CLASS_OM_DIM_SEQ
    Table CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS
    Dimension CHANNELS_OM
    Cube SALES_OM
    Mapping LOAD_CHANNELS_OM, LOAD_SALES_OM
    Note that we deployed the source tables as well. Normally, we import source tables from the database instead of deploying them to the target schema. However, in this example, we designed the source tables in OWB and deployed them to the database for the purpose of this demonstration.
    3. Prepare and examine source data
    Before running the mappings, we need to populate and examine the source data first. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as the target user. Then we check the data in these three tables.
    Table SRC_CHANNELS
    SQL> select rownum, id, class, name from src_channels;
    Records 1~5 are correct; they should be loaded into the dimension without error. Records 6, 7 and 8 have null parents; they should be loaded into the dimension with a default parent value, and should be inserted into the error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by the dimension and inserted into the error table.
    Tables SRC_ORDERS and SRC_ORDER_ITEMS
    SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id;
    Record 178 has a null dimension reference; it should be loaded into the cube with a default dimension reference, and should be inserted into the error table at the same time. Record 179 has an “invalid” dimension reference; it should be rejected by the cube and inserted into the error table. The other records should be aggregated and loaded into the cube correctly.
    4. Run the mappings and examine the target data
    In the Control Center Manager, expand BI_DEMO->SALES_WH_LOCAL->SALES_WH->Mappings, right-click the LOAD_CHANNELS_OM node, and click Start. Run mapping LOAD_SALES_OM the same way. When they have finished successfully, we can check the data in the target tables.
    Table CHANNELS_OM
    SQL> select rownum, total_id, total_name, total_source_id, class_id, class_name, class_source_id, channel_id, channel_name, channel_source_id from channels_om order by abs(dimension_key);
    Records 1, 2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally had null parents. We see their parent name (class_name) is set to DEF_CLASS_NAME. The records whose CHANNEL_NAME is Special_4, Special_5 or Special_6 are not loaded into this table because of the invalid parent.
    Error Table CHANNELS_OM_ERR
    SQL> select rownum, class_source_id, channel_id, channel_name, channel_source_id, err$$$_error_reason from channels_om_err order by channel_name;
    We can see that all the records with a null or invalid parent are inserted into this error table. The error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three.
    Table SALES_OM
    SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id;
    We can see the order record with a null channel_name has been loaded into the target table with a default channel_name. The one with an “invalid” channel_name was not loaded.
    Error Table SALES_OM_ERR
    SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+);
    We can see the order records with a null or invalid channel_name are inserted into the error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”. If it is invalid, the error reason is “Dimension record not found for fact”.
    Summary
    In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
    We are using a physical server and are in the process of automating an “ubuntu-12.04.1-server-amd64” OS installation on it. There are two HDDs for the OS installation, and they are in a RAID1 relationship; this setup has been done through the BIOS. The kickstart configuration file looks like this:
    #Generated by Kickstart Configurator
    #platform=AMD64 or Intel EM64T
    #System language
    lang en_US
    #Language modules to install
    langsupport en_US
    #System keyboard
    keyboard us
    #System mouse
    mouse
    #System timezone
    timezone Asia/Dili
    #Root password
    rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
    #Initial user
    user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
    #Reboot after installation
    reboot
    #Use text mode install
    text
    #Install OS instead of upgrade
    install
    #Use Web installation
    url --url my_repo_location
    #System bootloader configuration
    bootloader --location=mbr
    #Clear the Master Boot Record
    zerombr yes
    #Partition clearing information
    clearpart --all --initlabel
    #Disk partitioning information
    part /boot --fstype ext4 --size 100 --ondisk sda
    part / --fstype ext4 --size 10000 --ondisk sda
    part /var --fstype ext4 --size 10000 --ondisk sda
    part swap --size 1024 --ondisk sdb
    #System authorization infomation
    auth --useshadow --enablemd5
    #Network information
    network --bootproto=dhcp --device=eth0
    #Firewall configuration
    firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
    #X Window System configuration information
    xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME
    But I am getting the error below:
    No root file system is defined
    Please suggest what to do about this. Do we need to make any modification to the kickstart configuration file? Any help in this regard will be much appreciated. The automated Ubuntu OS installation succeeds in a virtual machine (VM) with the above ks.cfg (kickstart configuration file) but fails on the physical machine. Please suggest a fix and, if possible, provide a new ks.cfg that resolves the problem above. Thanks & Regards, Rajesh Prasad

    Read the article

  • ubuntu-12.04-wubi-i386.tar.xz for the wubi installer

    - by Alejandro
    I ran the Wubi installer (from an extracted Ubuntu ISO) and it started downloading ubuntu-12.04-wubi-i386.tar.xz, but the download was slow and non-resumable, so I cancelled it, found a mirror of the file online, and downloaded it using Internet Download Manager. Where should I place the ubuntu-12.04-wubi-i386.tar.xz file so the Wubi installer won't have to download it again? Thank you. Update: I extracted the archive and there are two files; I'm afraid I do not know where to place them.

    Read the article

  • Problem installing OpenOffice 3.1 on Solaris 10

    - by Sunil Kumar Sahoo
    I want OpenOffice on Solaris, so I downloaded OpenOffice from the link below. http://download.openoffice.org/other.html#tested-full The download is in .tar.gz format, so I unzipped the file using gunzip and then untarred it using the tar xvf command. Now I have a directory containing a packages subfolder. When I cd into that directory, I find many subdirectories, but I could not find a single .pkg, .jar, or .sh file that I could use to install OpenOffice on Solaris 10. How can I install OpenOffice on Solaris 10 given the scenario above?

    Read the article

  • Cannot open software sources after removing a PPA

    - by rkwbcca
    After deleting a PPA entry in the sources.list file, I was no longer able to open the Software Sources application. Opening the Software Centre is fine. I tried running gksudo software-properties-gtk and got the following message:
    SoftwareProperties.__init__(self, options=options, datadir=datadir)
    File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 96, in __init__
    self.reload_sourceslist()
    File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 580, in reload_sourceslist
    self.distro.get_sources(self.sourceslist)
    File "/usr/lib/python2.7/dist-packages/aptsources/distro.py", line 91, in get_sources
    raise NoDistroTemplateException("Error: could not find a "
    aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template
    I would appreciate it if you could let me know how to solve this problem.

    Read the article

  • Filling in PDF Forms with ASP.NET and iTextSharp

    The Portable Document Format (PDF) is a popular file format for documents, for two primary reasons. First, because the PDF standard is an open standard, there are many vendors that provide PDF readers across virtually all operating systems, and many proprietary programs, such as Microsoft Word, include a "Save as PDF" option. Consequently, PDFs serve as a sort of common currency of exchange. A person writing a document using Microsoft Word for Windows can save the document as a PDF, which can then be read by others whether or not they are using Windows and whether or not they have Microsoft Word installed. Second, PDF files are self-contained. Each PDF file includes its complete text, fonts, images, input fields, and other content. This means that even complicated documents with many images, an intricate layout, and user interface elements like textboxes and checkboxes can be encapsulated in a single PDF file. Due to their ubiquity and layout capabilities, it's not uncommon for websites to use PDF technology. For example, when purchasing goods at an online store you may be offered the ability to download an invoice as a PDF file. PDFs also support form fields, which are user interface elements like textboxes, checkboxes, comboboxes, and the like. These form fields can be filled in by a user viewing the PDF or, with a bit of code, they can be filled in programmatically. This article is the first in a multi-part series that examines how to programmatically work with PDF files from an ASP.NET application using iTextSharp, a .NET open source library for PDF generation. This installment shows how to use iTextSharp to open an existing PDF document with form fields, fill those form fields with user-supplied values, and then save the combined output to a new PDF file. Read on to learn more! Read More >
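    The article's own walkthrough uses iTextSharp from C#, which is not reproduced in this excerpt. Purely as an illustration of the same open-fill-save workflow, here is a rough sketch in Python using the pypdf library instead; the file names and field names are assumptions, and iTextSharp's API differs in its details.
    from pypdf import PdfReader, PdfWriter

    reader = PdfReader("invoice_template.pdf")      # an existing PDF that contains form fields
    writer = PdfWriter()
    writer.append(reader)                           # copy the pages (and the form) into the writer

    # Inspect reader.get_fields() to learn the real field names; these are placeholders.
    writer.update_page_form_field_values(
        writer.pages[0],
        {"CustomerName": "Jane Doe", "OrderTotal": "42.00"},
    )

    with open("invoice_filled.pdf", "wb") as out_file:
        writer.write(out_file)
    The iTextSharp version described in the article follows the same shape: read the template, set the field values by name, and write the result out as a new PDF.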

    Read the article

  • What's consuming HDD Space

    - by Umair Mustafa
    I have a single 92GB partition on which I installed Ubuntu 12.04, and for some unknown reason a message pops up saying that I only have 1GB of HDD space left. I ran the command sudo du -hscx * on / and /home. /home gave me this result:
    4.0K    C:\nppdf32Log\debuglog.txt
    0       convertedvideo.avi
    176M    Desktop
    16K     Documents
    169M    Downloads
    4.0K    examples.desktop
    17M     file.txt
    4.0K    Music
    984K    Pictures
    4.0K    Public
    320K    Red Hat 6.iso
    2.5M    syslog-ng_3.3.6.tar.gz
    4.0K    Templates
    8.0K    terminal.png
    1.2M    Thunderbird Attachments
    698M    ubuntu10.04LTS.iso
    16K     Ubuntu One
    4.0K    Untitled Folder
    4.0K    Videos
    21G     VirtualBox VMs
    22G     total
    And / gave me this result:
    81G     home
    0       initrd.img
    0       initrd.img.old
    833M    lib
    16K     lost+found
    68K     media
    4.0K    mnt
    260M    opt
    du: cannot access `proc/8339/task/8339/fd/4': No such file or directory
    du: cannot access `proc/8339/task/8339/fdinfo/4': No such file or directory
    du: cannot access `proc/8339/fd/4': No such file or directory
    du: cannot access `proc/8339/fdinfo/4': No such file or directory
    0       proc
    640K    root
    908K    run
    8.6M    sbin
    4.0K    selinux
    4.0K    srv
    0       sys
    148K    tmp
    3.3G    usr
    436M    var
    0       vmlinuz
    0       vmlinuz.old
    86G     total
    If you look at the result returned for /, it shows that /home is consuming 81GB, but the listing of /home itself totals only 22GB. I can't figure out what is consuming the HDD. I have not installed anything except virtual machines. Update: the perpetrator was found using Disk Usage Analyzer.

    Read the article

  • Conversion of tiff image in Python script - OCR using tesseract

    - by PYTHON TEAM
    I want to convert a tiff image file to a text document. My code works as I expected for tiff images in a usual font, but it is not working for a French script font. My tiff image file contains text, and the font of the text is French script. Here is my code:
    import Image
    import subprocess
    import util
    import errors

    tesseract_exe_name = 'tesseract' # Name of executable to be called at command line
    scratch_image_name = "temp.bmp" # This file must be .bmp or other Tesseract-compatible format
    scratch_text_name_root = "temp" # Leave out the .txt extension
    cleanup_scratch_flag = True # Temporary files cleaned up after OCR operation

    def call_tesseract(input_filename, output_filename):
        """Calls external tesseract.exe on input file (restrictions on types),
        outputting output_filename+'txt'"""
        args = [tesseract_exe_name, input_filename, output_filename]
        proc = subprocess.Popen(args)
        retcode = proc.wait()
        if retcode != 0:
            errors.check_for_errors()

    def image_to_string(im, cleanup=cleanup_scratch_flag):
        """Converts im to file, applies tesseract, and fetches resulting text.
        If cleanup=True, delete scratch files after operation."""
        try:
            util.image_to_scratch(im, scratch_image_name)
            call_tesseract(scratch_image_name, scratch_text_name_root)
            text = util.retrieve_text(scratch_text_name_root)
        finally:
            if cleanup:
                util.perform_cleanup(scratch_image_name, scratch_text_name_root)
        return text

    def image_file_to_string(filename, cleanup=cleanup_scratch_flag, graceful_errors=True):
        """Applies tesseract to filename directly; if that fails and graceful_errors=True,
        opens the image and converts it to a compatible format first.
        If cleanup=True, delete scratch files after operation."""
        try:
            try:
                call_tesseract(filename, scratch_text_name_root)
                text = util.retrieve_text(scratch_text_name_root)
            except errors.Tesser_General_Exception:
                if graceful_errors:
                    im = Image.open(filename)
                    text = image_to_string(im, cleanup)
                else:
                    raise
        finally:
            if cleanup:
                util.perform_cleanup(scratch_image_name, scratch_text_name_root)
        return text

    if __name__ == '__main__':
        im = Image.open("/home/oomsys/phototest.tif")
        text = image_to_string(im)
        print text
        try:
            text = image_file_to_string('fnord.tif', graceful_errors=False)
        except errors.Tesser_General_Exception, value:
            print "fnord.tif is incompatible filetype. Try graceful_errors=True"
            print value
        text = image_file_to_string('fnord.tif', graceful_errors=True)
        print "fnord.tif contents:", text
        text = image_file_to_string('fonts_test.png', graceful_errors=True)
        print text

    Read the article

  • How to Import Data Taxonomy Into Managed Metadata Service in SharePoint 2010

    - by Wayne
    First, open the Term Store Management Tool (Site Actions > Site Settings > Term Store Management) and download the sample import file. (Remember, Service Applications are configured on a per Web Application basis, so use any site collection inside a WebApp configured with your MMS.) Second, insert your data. In the photo below, I demonstrate creating a term called USA. Under that, I create the term Alabama. Under that, 4 cities. Then under USA, a term called Alaska. The point is that we have a hierarchy. Using the import file, we can go 7 layers deep. The last step is to save the file, head back into the Term Store Management Tool, select or create a group, and import the file.
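    For reference, the import file is a plain CSV. The layout below is reproduced from memory of the sample file, so treat it as an assumption and check the header row against the template you downloaded in the first step; it sketches the USA hierarchy described above:
    "Term Set Name","Term Set Description","LCID","Available for Tagging","Term Description","Level 1 Term","Level 2 Term","Level 3 Term","Level 4 Term","Level 5 Term","Level 6 Term","Level 7 Term"
    "Locations","Sample geographic term set",,True,,,,,,,,
    ,,,True,,"USA",,,,,,
    ,,,True,,"USA","Alabama",,,,,
    ,,,True,,"USA","Alabama","Birmingham",,,,
    ,,,True,,"USA","Alaska",,,,,
    Each row repeats the full path of parent terms, which is how the file expresses the hierarchy without any nesting syntax.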

    Read the article

  • EXALYTICS - Unable to run Summary Advisor when BI Admin Client Tool is installed separately

    - by Ahmed Awan
    Unable to launch Summary Advisor when the BI Admin Developer Client tool (version 11.1.1.6.0) is installed separately. In the Windows Application event log, the error points to a missing AggrAdvisor.xml file, and AggrAdvisor.xml is indeed missing from the BI client install location. Workaround: downloading the file AggrAdvisor.xml and copying it to the following location resolves this issue: <your drive>:\Program Files\Oracle Business Intelligence Enterprise Edition Plus Client\oraclebi\orahome\bifoundation\server\locale\l_en\

    Read the article

  • Remove Programs from the Open With Menu in Explorer

    - by Matthew Guay
    Would you like to clean up the Open with menu in Windows Explorer? Here’s how you can remove program entries you don’t want in this menu on any version of Windows. Have you ever accidentally opened an mp3 with Notepad, or a zip file with Word? If so, you’re also likely irritated that these programs now show up in the Open with menu in Windows Explorer every time you select one of those files. Whenever you open a file type with a particular program, Windows will add an entry for it to the Open with menu. Usually this is helpful, but it can also clutter up the menu with wrong entries. On our computer, we have tried to open a PDF file with Word and Notepad, neither of which can actually view the PDF itself. Let’s remove these entries. To do this, we need to remove the registry entries for these programs. Enter regedit in your Start menu search or in the Run command to open the Registry Editor. Back up your registry first, just in case, so you can roll back any changes you make if you accidentally delete the wrong value. Now, browse to the following key:
    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\
    Here you’ll see a list of all the file extensions that are registered on your computer. Browse to the file extension you wish to edit, click the white triangle beside it to see the subfolders, and select OpenWithList. In our test, we want to change the programs associated with PDF files, so we select the OpenWithList folder under .pdf. Notice the names of the programs under the Data column on the right. Right-click the value for the program you don’t want to see in the Open With menu and select Delete. Click Yes at the prompt to confirm that you want to delete this value. Repeat these steps with all the programs you want to remove from this file type’s Open with menu. You can go ahead and remove entries from other file types as well if you wish. Once you’ve removed the entries you didn’t want to see, check out the Open with menu in Explorer again. Now it will be much more streamlined and will only show the programs you want to see.
    Conclusion
    This simple trick can help you keep your Open with menu tidy, and only show the programs you want in the list. It can be irritating to accidentally open files in programs that can’t even read them. This trick works in all versions of Windows, including 2000, XP, Vista, and Windows 7.
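    If you have many extensions to clean up, the same registry edit can be scripted. The sketch below is only an illustration (Python 3 on Windows, using the standard winreg module); the program names in the unwanted set are placeholders, so print the values first and keep your registry backup handy.
    import winreg

    ext = ".pdf"   # the extension whose Open with menu you want to tidy
    key_path = r"Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\{}\OpenWithList".format(ext)
    unwanted = {"WINWORD.EXE", "notepad.exe"}   # placeholder names; check the printed output first

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path, 0, winreg.KEY_ALL_ACCESS) as key:
        values = []
        index = 0
        while True:
            try:
                name, data, _ = winreg.EnumValue(key, index)
            except OSError:
                break   # no more values under this key
            values.append((name, data))
            index += 1
        for name, data in values:
            print(name, data)
            # MRUList just tracks ordering; leave it alone and only delete program entries.
            if name != "MRUList" and data in unwanted:
                winreg.DeleteValue(key, name)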

    Read the article

< Previous Page | 811 812 813 814 815 816 817 818 819 820 821 822  | Next Page >