Search Results

Search found 6952 results on 279 pages for 'gui designer'.

Page 241/279

  • Geek Fun: Virtualized Old School Windows – Windows 95

    - by Matthew Guay
    Last week we enjoyed looking at Windows 3.1 running in VMware Player on Windows 7. Today, let's upgrade our 3.1 to 95, and get a look at how most of us remember Windows from the '90s. In this demo, we're running the first release of Windows 95 (version 4.00.950) in VMware Player 3.0 running on Windows 7 x64. For fun, we ran the 95 upgrade on the 3.1 virtual machine we built last week.

Windows 95

So let's get started. Here's the first setup screen. For the record, Windows 95 installed in about 15 minutes or less in VMware in our test. Strangely, Windows 95 offered several installation choices. They actually let you choose what extra parts of Windows to install if you wished. Oh, and who wants to run Windows 95 on your "Portable Computer"? Most smartphones today are more powerful than the "portable computers" of '95. Your productivity may vastly increase if you run Windows 95. Anyone want to switch? No, I don't want to restart ... I want to use my computer! Welcome to Windows 95! Hey, did you know you can launch programs from the Start button? Our quick spin around Windows 95 reminded us why Windows got such a bad reputation in the '90s for being unstable. We didn't even get our test copy fully booted after installation before we saw our first error screen. Windows in space ... was that the most popular screensaver in Windows 95, or was it just me? Hello, Windows 3.1! The UI was still outdated in some spots. Ah, yes, Media Player before it got 101 features to compete with iTunes. But you couldn't even play CDs in Media Player. Actually, CD Player was one program I used almost daily in Windows 95 back in the day. Want some new programs? This help file about new programs designed for Windows 95 lists a lot of outdated names in tech. And you really may want some programs - the first edition of Windows 95 didn't even ship with Internet Explorer. We've still got Minesweeper, though! My Computer had really limited functionality, and by default opened everything in a new window. Double-click on C:, and it opens in a new window. Ugh. But Explorer is a bit more like modern versions. Hey, look, Start menu search! If only it found the files you were looking for... Now I'm feeling old ... this shutdown screen brought back so many memories ... of shutdowns that wouldn't shut down! But you still have to turn off your computer. I wonder how many old monitors had these words burned into them? So there's yet another trip down Windows memory lane. Most of us can remember using Windows 95, so let us know your favorite (or worst) memory of it! At least we can all be thankful for our modern computers and operating systems today, right?

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer who is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER - IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending in this best practice for enumerations in relational databases. He has written an excellent piece here, and comments are welcome.

Enumerations in Relational Database

This is a very basic subject in relational databases, but it is often not well understood and sometimes badly implemented. There are of course many ways to do this, but I will concentrate on only two cases: one which is "the right way" and one which is definitely the wrong way.

The concept

Let's say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there's a field that tells the person's marital status; let's name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn't have enumerations.

The wrong way

This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you'll see the enumeration's description instantly when you do a simple SELECT query and you don't have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I've left all indexes and constraints out of the query on purpose).

    CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(10) )

You have an nvarchar(10) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn't fit into 10 characters? You'll have to come back and alter the table. There are other problems too, but I'll leave those for the reader to think about.

The correct way

Here's how I've done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I'll write all the indexes and constraints here too.

    CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) )
    GO
    INSERT INTO [CodeNamespace] SELECT 'MaritalStatus'
    GO

Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I'll add a couple of rows here too.

    CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) )
    GO
    -- 1 is the 'MaritalStatus' namespace
    INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1
    INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2
    INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3
    INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4
    GO

Now there are four columns in the CodeValue table.
CodeNamespaceId tells which namespace the value belongs to. Value holds the enumeration value which is used in the Person table (I'll show how this is done below). Description tells what the value means. You can use this column, for example, in a UI combo box. OrderBy tells whether the values need to be ordered in some way when displayed in the UI. And here's the Person table again, now with the correct columns. I'll add one row here to show how the enumerations are to be used.

    CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] INT )
    GO
    INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3
    GO

Now, I said earlier that there is one problem with this. The MaritalStatus column doesn't have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I've solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combo box / drop-down list (with the Value field as the value and Description as the text) so the end user can't enter any illegal values; and of course I check the entered value in the data access layer as well. I said in the "The wrong way" section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bind so you can even index it if you like.

    CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING AS SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus FROM [dbo].[Person] p JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value] JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus'
    GO
    -- Select from View
    SELECT * FROM [dbo].[Person_v]
    GO

This is an excellent write-up by Marko Parkkola. Do you have this kind of design set up at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
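To connect the two worlds the article describes, here is a minimal C# sketch (not part of Marko's original write-up) of the enumeration he mentions, with its numeric values assumed to match the CodeValue rows inserted above. The class and method names are purely illustrative.

    // Mirrors the CodeValue rows of the 'MaritalStatus' namespace so application
    // code can use typed values instead of magic numbers.
    public enum MaritalStatus
    {
        Single = 1,
        InRelationship = 2,
        Married = 3,
        Divorced = 4
    }

    public static class MaritalStatusDemo
    {
        public static void Main()
        {
            // The integer that would be written to Person.MaritalStatus for the
            // sample row above ('Marko', 'Parkkola', '1977-03-04', 3).
            int stored = (int)MaritalStatus.Married;
            System.Console.WriteLine(stored);     // 3

            // Reading the column back: cast the integer to the enum.
            var status = (MaritalStatus)stored;
            System.Console.WriteLine(status);     // Married
        }
    }

Keeping the numeric values in one place (either generated from the CodeValue table or checked against it at application startup) is one way to stop the enum and the table from drifting apart.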

    Read the article

  • Share a Printer on Your Network from Vista or XP to Windows 7

    - by Mysticgeek
    The other day we looked at sharing a printer between Windows 7 machines, but you may only have one Windows 7 machine and the printer is connected to a Vista or XP computer. Today we show you how to share a printer from either Vista or XP to Windows 7. We previously showed you how to share files and printers between Windows 7 and XP. But what if you have a printer connected to an XP or Vista machine in another room, and you want to print to it from Windows 7? This guide will walk you through the process. Note: In these examples we're using 32-bit versions of Windows 7, Vista, and XP on a basic home network. We are using an HP PSC 1500 printer, but keep in mind every printer is different, so finding and installing the correct drivers will vary.

Share a Printer from Vista

To share the printer on a Vista machine, click on Start, enter printers into the search box and hit Enter. Right-click on the printer you want to share and select Sharing from the context menu. Now in Printer Properties, select the Sharing tab, mark the box next to Share this printer, and give the printer a name. Make sure the name is something simple with no spaces, then click OK.

Share a Printer from XP

To share a printer from XP, click on Start then select Printers and Faxes. In the Printers and Faxes window, right-click on the printer to share and select Sharing. In the Printer Properties window, select the Sharing tab and the radio button next to Share this printer, give it a short name with no spaces, then click OK.

Add Printer to Windows 7

Now that we have the printer on Vista or XP set up to be shared, it's time to add it to Windows 7. Open the Start Menu and click on Devices and Printers. In Devices and Printers click on Add a printer. Next click on Add a network, wireless or Bluetooth printer. Windows 7 will search for the printer on your network, and once it's been found, click Next. The printer has been successfully added...click Next. Now you can set it as the default printer and send a test page to verify everything works. If everything is successful, close out of the add printer screens and you should be good to go.

Alternate Method

If the method above doesn't work, you can try the following for either XP or Vista. In our example, when trying to add the printer connected to our XP machine, it wasn't recognized automatically. If your search pulls up nothing, then click on The printer that I want isn't listed. In the Add Printer window, under Find a printer by name or TCP/IP address, click the radio button next to Select a shared printer by name. You can either type in the path to the printer or click on Browse to find it. In this instance we decided to browse to it, and notice we have 5 computers found on the network. We want to be able to print to the XPMCE computer so we double-click on that. Type in the username and password for that computer... Now we see the printer and can select it. The path to the printer is put into the Select a shared printer by name field. Wait while Windows connects to the printer and installs it... It's successfully added...click Next. Now you can set it as the default printer or not, and print a test page to make sure everything works successfully. Now when we go back to Devices and Printers under Printers and Faxes, we see the HP printer on XPMCE.

Conclusion

Sharing a printer from one machine to another can sometimes be tricky, but the method we used here in our setup worked well. Since the printer we used is fairly new, there wasn't a problem with locating any drivers for it.
Windows 7 includes a lot of device drivers already, so you may be surprised by what it's able to install. Your results may vary depending on your type of printer, Windows version, and network setup. This should get you started configuring the machines on your network, hopefully with good results. If you have two Windows 7 computers, then sharing a printer or files is easy through the Homegroup feature. You can also share a printer between Windows 7 machines on the same network but not in the same Homegroup.

    Read the article

  • Where is my app.config for SSIS?

    Sometimes when working with SSIS you need to add or change settings in the .NET application configuration file, which can be a bit confusing when you are building an SSIS package, not an application. First of all, let's review a couple of examples where you may need to do this. You are referencing an assembly in a Script Task that uses Enterprise Library (aka EntLib), so you need to add the relevant configuration sections and settings, perhaps for the logging application block. You are using Enterprise Library in a custom task or component, and again you need to add the relevant configuration sections and settings. You are using a web service with Microsoft Web Services Enhancements (WSE) 3.0 and hosting the proxy in SSIS, in an assembly used by your package, and need to add the configuration sections and settings. You need to change behaviours of the .NET Framework which can be influenced by a configuration file, such as the System.Net.Mail default SMTP settings. Perhaps you wish to configure System.Net and the httpWebRequest header for parsing unsafe headers (useUnsafeHeaderParsing), which will change the way the HTTP Connection Manager behaves. You are consuming a WCF service and wish to specify the endpoint in configuration. There are no doubt plenty more examples, but each of these requires us to identify the correct configuration file and make the relevant changes.

There are actually several configuration files, each used by a different execution host depending on how you are working with the SSIS package. The folders we need to look in will vary depending on the version of SQL Server as well as the processor architecture, but most are in what we can call the Binn folder. The SQL Server 2005 Binn folder is at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\, compared to C:\Program Files\Microsoft SQL Server\100\DTS\Binn\ for SQL Server 2008. If you are on a 64-bit machine then you will see C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\ for the 32-bit executables and C:\Program Files\Microsoft SQL Server\90\DTS\Binn\ for 64-bit, so be sure to check all relevant locations. Of course SQL Server 2008 may have a C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\ on a 64-bit machine too. To recap, the version of SQL Server determines whether you look in the 90 or 100 sub-folder under SQL Server in Program Files (C:\Program Files\Microsoft SQL Server\nn\). If you are running a 64-bit operating system then you will have two instances of Program Files, C:\Program Files (x86)\ for 32-bit and C:\Program Files\ for 64-bit. You may wish to check both depending on what you are doing, but this is covered more under each section below. There are a total of five specific configuration files that you may need to change; each one is detailed below.

DTExec.exe.config

DTExec.exe is the standalone command line tool used for executing SSIS packages, and therefore it is an execution host with an app.config file, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTExec.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.

DtsDebugHost.exe.config

DtsDebugHost.exe is the execution host used by Business Intelligence Development Studio (BIDS) / Visual Studio when executing a package from the designer in debug mode, which is the default behaviour, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DtsDebugHost.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.
This may surprise some people, as Visual Studio is only 32-bit, but thankfully the debugger supports both; this can be set in the project properties, see the Run64BitRuntime property (true or false) in the Debugging pane of the Project Properties.

dtshost.exe.config

dtshost.exe is the execution host used by what I think of as the built-in features of SQL Server, such as SQL Server Agent, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtshost.exe.config. This file can be found in both the 32-bit and 64-bit Binn folders.

devenv.exe.config

Something slightly different is devenv.exe, which is Visual Studio. This configuration file may also need changing if you need a feature at design time, such as in a Task Editor or Connection Manager editor. Visual Studio 2005 for SQL Server 2005 - C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe.config. Visual Studio 2008 for SQL Server 2008 - C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config. Visual Studio is only available in 32-bit, so on a 64-bit machine you will have to look in C:\Program Files (x86)\ only.

DTExecUI.exe.config

The DTExecUI tool can also have a configuration file, and these can be found under the Tools folders for SQL Server as shown below. C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe and C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe. A configuration file may not exist, but if you can find the matching executable you know you are in the right place, so you can go ahead and add a new file yourself.

In summary we have covered the assembly configuration files for all of the standard methods of building and running an SSIS package, but obviously if you are working programmatically you will need to make the relevant modifications to your program's app.config as well.
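To make the point about execution hosts concrete, here is a small hedged C# sketch (not from the original post) of the kind of code you might have in a Script Task or custom component. The "SmtpServerOverride" key is purely hypothetical; the important part is that ConfigurationManager reads the .config file of whichever process is hosting the package, so the setting has to live in DtsDebugHost.exe.config while debugging in BIDS, in DTExec.exe.config when run from the command line, and so on.

    using System;
    using System.Configuration;   // requires a reference to System.Configuration.dll

    public static class HostConfigDemo
    {
        public static void Main()
        {
            // Looks in the app.config of the executing host, not of your package project.
            string value = ConfigurationManager.AppSettings["SmtpServerOverride"];
            Console.WriteLine(value ?? "Setting not found in the executing host's config file.");
        }
    }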

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 3

    - by rajbk
    We continue building our report in this three part series. Creating an ASP.NET report using Visual Studio 2010 - Part 1 Creating an ASP.NET report using Visual Studio 2010 - Part 2

Adding the ReportViewer control and filter drop-downs

Open the source code for index.aspx and add a ScriptManager control. This control is required for the ReportViewer control. Add a DropDownList for the categories and suppliers. Add the ReportViewer control. The markup after these steps is shown below.

    <div>
      <asp:ScriptManager ID="smScriptManager" runat="server">
      </asp:ScriptManager>
      <div id="searchFilter">
        Filter by: Category :
        <asp:DropDownList ID="ddlCategories" runat="server" />
        and Supplier :
        <asp:DropDownList ID="ddlSuppliers" runat="server" />
      </div>
      <rsweb:ReportViewer ID="rvProducts" runat="server">
      </rsweb:ReportViewer>
    </div>

The design view for index.aspx is shown below. The drop-downs will display the categories and suppliers in the database. Changing the selection in the drop-downs will cause the report to be filtered by the selections in the drop-downs. You will see how to do this in the next steps.

Attach the RDLC to the ReportViewer control by clicking on the top right of the control, going to ReportViewer Tasks and selecting Products.rdlc. Resize the ReportViewer control by dragging at the bottom right corner. I set mine to 800px x 500px. You can also set this value in source view.

Defining the data sources

We will now define the data source used to populate the report. Go back to the "ReportViewer Tasks" and select "Choose Data Sources". Select a "New data source..". Select "Object" and name your Data Source ID "odsProducts". In the next screen, choose "ProductRepository" as your business object. Choose "GetProductsProjected" in the next screen. The method requires a SupplierID and CategoryID. We will set these so that our data source gets the values from the drop-down lists we defined earlier. Set the parameter source to be of type "Control" and set the ControlIDs to be ddlSuppliers and ddlCategories respectively. Your screen will look like this:

We are now going to define the data source for our drop-downs. Select the ddlCategories drop-down and pick "Choose Data Source". Pick "Object" and give it an id "odsCategories". In the next screen, choose "ProductRepository". Select the GetCategories() method in the next screen. Select "CategoryName" and "CategoryID" in the next screen. We are done defining the data source for the Category drop-down. Perform the same steps for the Suppliers drop-down.

Select each drop-down and set AppendDataBoundItems to true and AutoPostBack to true. AppendDataBoundItems is needed because we are going to insert an "All" list item with a value of empty. Go to each drop-down and add this list item markup as shown below. Finally, double-click on each drop-down in the designer and add the following code in the code-behind. This, along with the AutoPostBack="true" attribute, refreshes the report any time a drop-down is changed.

    protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e)
    {
        rvProducts.LocalReport.Refresh();
    }

    protected void ddlSuppliers_SelectedIndexChanged(object sender, EventArgs e)
    {
        rvProducts.LocalReport.Refresh();
    }

Compile your report and run the page. You should see the report rendered. Note that the toolbar in the ReportViewer control gives you a couple of options, including the ability to export the data to Excel, PDF or Word.
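The screenshot showing the "All" list item markup is not reproduced here. As a hedged alternative sketch (this is not the article's exact markup, and the page class name is assumed), the same empty-value "All" entry can be added from code-behind; because AppendDataBoundItems is true, the data-bound categories and suppliers are appended after it rather than replacing it.

    using System;
    using System.Web.UI.WebControls;

    public partial class Index : System.Web.UI.Page
    {
        protected void Page_Init(object sender, EventArgs e)
        {
            // Insert the static "All" entry (empty value) ahead of the data-bound items.
            ddlCategories.Items.Insert(0, new ListItem("All", string.Empty));
            ddlSuppliers.Items.Insert(0, new ListItem("All", string.Empty));
        }
    }

In the article itself this is done declaratively, with a list item element inside each DropDownList, which keeps the page markup self-describing.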
Conclusion

Through this three part series, we did the following:
- Created a data layer for use by our RDLC.
- Created an RDLC using the report wizard and defined a dataset for the report.
- Used the report design surface to design our report, including adding a chart.
- Used the ReportViewer control to attach the RDLC.
- Connected our ReportViewer to a data source and took parameter values from the drop-down lists.
- Used AutoPostBack to refresh the report when the drop-down selection was changed.

RDLCs allow you to create interactive reports including drill-downs and grouping. For even more advanced reports you can use Microsoft® SQL Server™ Reporting Services with RDLs. With RDLs, the report is rendered on the report server instead of the web server. Another nice thing about RDLs is that you can define a parameter list for the report and it gets rendered automatically for you. RDLCs and RDLs both have their advantages, and it's best to compare them and choose the right one for your requirements. Download VS2010 RTM Sample project NorthwindReports.zip

Alfred Borden: Are you watching closely?

    Read the article

  • The Incremental Architect&rsquo;s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

1. Analyzing the requirements to find the Entry Points and their signatures
2. Designing the functionality to be executed when those Entry Points get triggered
3. Implementing the functionality according to the design, aka coding

I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought. Here's my definition: To design functionality (or functional design for short) means thinking about... well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution, that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs to be logic. It's equally basic. It's what code needs to do in order to deliver some functionality or quality. Logic does the doing that needs to be done by software. Transformations are either done through expressions or API calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn't it?

What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why to me functional design is so important. It's at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It's just a theory with no experiments to prove it. But how to do functional design?

An example of functional design

Let's assume a program to de-duplicate strings.
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

    C:\>deduplicate "a, b, a, c, d, b, e, c, a"
    a, b, c, d, e

...or by clicking on a GUI button. This leads to the Entry Point function being called. It's the program's main function in case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable. What then happens is a three step process:

Transform the input data from the user into a request.
Call the request handler.
Transform the output of the request handler into a tangible result for the user.

Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

Accept string list from command line
De-duplicate
Present de-duplicated strings on standard output

And this concrete list of processing steps can easily be transformed into code:

    static void Main(string[] args) {
        var input = Accept_string_list(args);
        var output = Deduplicate(input);
        Present_deduplicated_string_list(output);
    }

Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

    private static string Accept_string_list(string[] args) {
        return args[0];
    }

    private static void Present_deduplicated_string_list(string[] output) {
        var line = string.Join(", ", output);
        Console.WriteLine(line);
    }

Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call. And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

De-duplicate
Parse the input string into a true list of strings.
Register each string in a dictionary/map/set. That way duplicates get cast away.
Transform the data structure into a list of unique strings.

Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
But now, after this refinement, the implementation of each step is easy again:

    private static string[] Parse_string_list(string input) {
        return input.Split(',')
                    .Select(s => s.Trim())
                    .ToArray();
    }

    private static Dictionary<string,object> Compile_unique_strings(string[] strings) {
        return strings.Aggregate(
            new Dictionary<string, object>(),
            (agg, s) => { agg[s] = null; return agg; });
    }

    private static string[] Serialize_unique_strings(Dictionary<string,object> dict) {
        return dict.Keys.ToArray();
    }

With these three additional functions Main() now looks like this:

    static void Main(string[] args) {
        var input = Accept_string_list(args);
        var strings = Parse_string_list(input);
        var dict = Compile_unique_strings(strings);
        var output = Serialize_unique_strings(dict);
        Present_deduplicated_string_list(output);
    }

I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

Accept string list from command line
Parse the input string into a true list of strings.
Register each string in a dictionary/map/set. That way duplicates get cast away.
Transform the data structure into a list of unique strings.
Present de-duplicated strings on standard output

You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

Introducing Flow Design

Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process, on the other hand, consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g.

Get data from user
Output result to user
Transform data
Parse data
Map result for output

Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next, etc.?

Get data from user
Parse data
Transform data
Map result for output
Output result to user

That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
For more information on this approach, which I call "Informed TDD", read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always simple enough to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text just has one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition, functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are. That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

Visualizing processes

The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling it. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design, processes can be depicted very easily. Here's the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer", on the other hand, signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's going to arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units. By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, or where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment. My bet is: if nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules: Translate an input port to a function. Translate an output port either to a return statement in that function or to a function pointer visible to that function. The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words". In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: any number of (word) data items can flow from the functional unit for each input data item. It might be none, or one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

    void filter_stop_words(string word, Action<string> onNoStopWord) {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

    class Filter {
        public void filter_stop_words(string word) {
            if (...check if not a stop word...)
                onNoStopWord(word);
        }
        public event Action<string> onNoStopWord;
    }

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much-cited solution including extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

    string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

Words need to be assembled into lines
Words need to be extracted from the input text
The resulting lines need to be assembled into the output text
Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: The first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It's not outside the brackets but inside.
That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail. Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not... I can always backtrack and refine a process step using functional design later once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls (a sketch of this wiring follows at the end of this article): Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope. Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well-tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just "phase in" the remaining processing step: Since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation, just to check that this works, does not do anything to split long words. The input is just passed on: Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so. Because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then... Is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
What´s the easier, more straightforward way to clean code? And keep in mind: You ain´t seen it all yet ;-) There´s more to Flow Design than described in this chapter. In closing I hope I was able to give you an impression of functional design that makes you hungry for more. To me it´s an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that´s necessary, read my blog article here. For now let me point out to you - if you haven´t already noticed - that Flow Design is a general-purpose declarative language. It´s “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We´re talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.   PS: If you like what you read, consider getting my ebook “The Incremental Architekt´s Napkin”. It´s where I compile all the articles in this series for easier reading. I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That´s why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don´t put too much of it into a single processing step. Image taken from www.physioweb.org. My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first-class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you´re using a Functional Programming language it´s of course a no-brainer. Taken from his blog post “The Craftsman 62, The Dark Path”.
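The implementations referenced above were shown as images in the original post and are not reproduced here. As a rough sketch only - not the original code, with method names and splitting logic assumed - the word-wrap flow described above might read like this as a plain sequence of function calls in C#:

using System;
using System.Collections.Generic;

static class WordWrapFlow
{
    // The root of the flow: just a sequence of function calls, one per processing step.
    public static string WordWrap(string text, int maxLineLength)
    {
        IEnumerable<string> words = ExtractWords(text);
        IEnumerable<string> fittingWords = SplitLongWords(words, maxLineLength); // increment 2, "phased in"
        IEnumerable<string> lines = AssembleLines(fittingWords, maxLineLength);
        return AssembleText(lines);
    }

    static IEnumerable<string> ExtractWords(string text)
    {
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    }

    // Words longer than a line are split into chunks of at most maxLineLength characters.
    static IEnumerable<string> SplitLongWords(IEnumerable<string> words, int maxLineLength)
    {
        foreach (var word in words)
            for (int i = 0; i < word.Length; i += maxLineLength)
                yield return word.Substring(i, Math.Min(maxLineLength, word.Length - i));
    }

    static IEnumerable<string> AssembleLines(IEnumerable<string> words, int maxLineLength)
    {
        var line = "";
        foreach (var word in words)
        {
            if (line.Length == 0) line = word;
            else if (line.Length + 1 + word.Length <= maxLineLength) line += " " + word;
            else { yield return line; line = word; }
        }
        if (line.Length > 0) yield return line;
    }

    static string AssembleText(IEnumerable<string> lines)
    {
        return string.Join("\n", lines);
    }
}

Note how the long-word step is simply inserted between extracting words and assembling lines; the surrounding steps stay untouched, which is the Open Closed Principle effect the article describes.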

    Read the article

  • Surface development: it&rsquo;s just like software development

    - by Dennis Vroegop
    Surface is magic. Everyone using it seems to think that way. And I have to be honest, after working for almost 2 years with the platform I still get that special feeling the moment I turn on the unit to do some more work. The whole user experience, the rich environment of the SDK, the touch, even the look and feel of the Surface environment is so much different from the stuff I’ve been working on all my career that I am still bewildered by it. But… and this is a big but… in the end we’re still talking about a computer and that needs software to become useful. Deep down beneath the magic of the Surface unit there is a PC somewhere, running Windows Vista and the .NET Framework 3.5. When you write that magic software that makes the platform come alive you’re still working with .NET, WPF/XNA, C#, VB.NET and all those other tools and technologies you know so well. Sure, the whole user experience is different from what you’ve known. And the way of thinking about users, their interaction and the positioning of screen elements requires a whole new paradigm. And that takes time. It took me about half a year before I had the feeling I had it nailed down. But when that moment came (about 18 months ago…) I realized that everything I had learned so far about software development still is true when it comes to Surface. The last 6 months I have been working with some people with a different background to start a new company. The idea was that the new company would be focussing on Surface and Surface only. These people come from a marketing background and had some good ideas for some applications. And I have to admit: their ideas were good. Very good. Where it all fell down of course is that these ideas need to be implemented in a piece of software. And creating great software takes skilled developers and a lot of time and money. That’s where things went wrong: the marketing guys didn’t realize and didn’t want to realize that software development is a job that takes skill. You can’t just hire a bunch of developers and expect them to deliver the best sort of software, especially not when it comes to Surface. I tried to explain that yes, their user interface in Photoshop looked great, but no: I couldn’t develop an application like that in a week’s time. Even worse: the whole backend of the software (WCF for communications, SQL Server for the database, etc.) would take a lot more time than the frontend. They didn’t understand. It took them a couple of days to draw the UI in Photoshop, so in Blend I should be able to build the software in about the same amount of time. Well, you and I know that it doesn’t work that way. Software is hard to write, and even harder to write well, and it takes skill and dedication. It’s not something you can do as fast as you can draw a mock-up for a Surface application in Photoshop. The same holds true for web applications of course. A lot of designers there fail to appreciate the hard work that goes into writing the plumbing for a good web app that can handle thousands of users. Although the UI is very important, it’s not all there is to it. And in Surface development this is the same. The UI should create the feeling of magic, but the software behind it is what makes it come alive. And that takes time. A lot of time. So brush up your skills and don’t throw them away if you start developing for Surface. Because projects (and collaborations) can fail there as hard as they can in any other area of software development. 
    On a side note: we decided to stop the collaboration (something the other parties involved didn’t appreciate and were very angry about) and to hire a designer for the Surface projects. The focus is back where it belongs: on the software development we know so well and have been doing very well for 13 years. UI is just a part of the whole project and not the end product. So my company Detrio is still going strong when it comes to delivering Surface solutions, but once again from a technological background, not a marketing background.

    Read the article

  • CodePlex Daily Summary for Sunday, March 07, 2010

    CodePlex Daily Summary for Sunday, March 07, 2010New ProjectsAlgorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: * A Numeric Text Box (an Extended NumericUpDown) * A Splash Screen base fo...Automaton Home: Automaton is a home automation software built with a n-Tier, MVVM pattern utilzing WCF, EF, WPF, Silverlight and XBAP.Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...indiologic: Utilities of an IndioNeural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering FairPólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. * By we, we mean I; and by efficient, I mean hopefully so.project euler solutions from mhinze: mhinze project euler solutionsSilverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layersqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...SuperSocket: SuperSocket, a socket application framework can build FTP/SMTP/POP server easilyToast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVCNew ReleasesALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox SplashScreen Based on the VB.NET code, but that doesn't really matter.Blacklist of Providers: 1.0-Milestone 1: Blacklist of Providers.Milestone 1In this development release implemented - Main interface (Work Item #5453) - Database (Work Item #5523)C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1 built against .NET 4.0. The WCF tests currently fail unless...Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download the individually or all together (in a .zip file). More releases coming soon!Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...ESB Toolkit Extensions: Tellago SOA ESB Extenstions v0.3: Windows Installer file that installs Library on a BizTalk ESB 2.0 system. 
This Install automatically configures the esb.config to use the new compo...GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 AlphaHome Access Plus+: v3.0.3.0: Version 3.0.3.0 Release Change Log: Added Announcement Box Removed script files that aren't needed Fixed & issue in directory path Stylesheet...Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...mavjuz WndLpt: wndlpt-0.2.5: New: Response to 5 LPT inputs "test i 1" New: Reaction to 12 LPT outputs "test q 8" New: Reaction to all LPT pins "test pin 15" New: Syntax: ...Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND and a possibility to recreate neural network from...Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double clicking on a generic item.RoTwee: RoTwee 6.2.0.0: New feature is as next. 16649 Add hashtag for tweet of tune.Now you can tweet your playing tune with hashtag.Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want in the click of a button. All you got to do is click the button and browser via th...WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: Whats New File Browser: Folders & Files View reworked File Browser: Folders & Files View reworked File Browser: Folders are displayed as TreeVi...WSDLGenerator: WSDLGenerator 0.0.0.4: - replaced CommonLibrary.dll by CommandLineParser.dll - added better support for custom complex typesMost Popular ProjectsMetaSharpSilverlight ToolkitASP.NET Ajax LibraryAll-In-One Code FrameworkWindows 7 USB/DVD Download Toolニコ生アラートWindows Double ExplorerVirtual Router - Wifi Hot Spot for Windows 7 / 2008 R2Caliburn: An Application Framework for WPF and SilverlightArkSwitchMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsBlogEngine.NETjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryIonics Isapi Rewrite FilterFarseer Physics EngineFasterflect - A Fast and Simple Reflection APIFluent Assertions

    Read the article

  • Master Note for Generic Data Warehousing

    - by lajos.varady(at)oracle.com
    ++++++++++++++++++++++++++++++++++++++++++++++++++++ The complete and the most recent version of this article can be viewed from My Oracle Support Knowledge Section. Master Note for Generic Data Warehousing [ID 1269175.1] ++++++++++++++++++++++++++++++++++++++++++++++++++++In this Document   Purpose   Master Note for Generic Data Warehousing      Components covered      Oracle Database Data Warehousing specific documents for recent versions      Technology Network Product Homes      Master Notes available in My Oracle Support      White Papers      Technical Presentations Platforms: 1-914CU; This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review. Applies to: Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.2.0.2 - Release: 9.2 to 11.2Information in this document applies to any platform. Purpose Provide navigation path Master Note for Generic Data Warehousing Components covered Read Only Materialized ViewsQuery RewriteDatabase Object PartitioningParallel Execution and Parallel QueryDatabase CompressionTransportable TablespacesOracle Online Analytical Processing (OLAP)Oracle Data MiningOracle Database Data Warehousing specific documents for recent versions 11g Release 2 (11.2)11g Release 1 (11.1)10g Release 2 (10.2)10g Release 1 (10.1)9i Release 2 (9.2)9i Release 1 (9.0)Technology Network Product HomesOracle Partitioning Advanced CompressionOracle Data MiningOracle OLAPMaster Notes available in My Oracle SupportThese technical articles have been written by Oracle Support Engineers to provide proactive and top level information and knowledge about the components of thedatabase we handle under the "Database Datawarehousing".Note 1166564.1 Master Note: Transportable Tablespaces (TTS) -- Common Questions and IssuesNote 1087507.1 Master Note for MVIEW 'ORA-' error diagnosis. 
For Materialized View CREATE or REFRESHNote 1102801.1 Master Note: How to Get a 10046 trace for a Parallel QueryNote 1097154.1 Master Note Parallel Execution Wait Events Note 1107593.1 Master Note for the Oracle OLAP OptionNote 1087643.1 Master Note for Oracle Data MiningNote 1215173.1 Master Note for Query RewriteNote 1223705.1 Master Note for OLTP Compression Note 1269175.1 Master Note for Generic Data WarehousingWhite Papers Transportable Tablespaces white papers Database Upgrade Using Transportable Tablespaces:Oracle Database 11g Release 1 (February 2009) Platform Migration Using Transportable Database Oracle Database 11g and 10g Release 2 (August 2008) Database Upgrade using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007) Platform Migration using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007)Parallel Execution and Parallel Query white papers Best Practices for Workload Management of a Data Warehouse on the Sun Oracle Database Machine (June 2010) Effective resource utilization by In-Memory Parallel Execution in Oracle Real Application Clusters 11g Release 2 (Feb 2010) Parallel Execution Fundamentals in Oracle Database 11g Release 2 (November 2009) Parallel Execution with Oracle Database 10g Release 2 (June 2005)Oracle Data Mining white paper Oracle Data Mining 11g Release 2 (March 2010)Partitioning white papers Partitioning with Oracle Database 11g Release 2 (September 2009) Partitioning in Oracle Database 11g (June 2007)Materialized Views and Query Rewrite white papers Oracle Materialized Views  and Query Rewrite (May 2005) Improving Performance using Query Rewrite in Oracle Database 10g (December 2003)Database Compression white papers Advanced Compression with Oracle Database 11g Release 2 (September 2009) Table Compression in Oracle Database 10g Release 2 (May 2005)Oracle OLAP white papers On-line Analytic Processing with Oracle Database 11g Release 2 (September 2009) Using Oracle Business Intelligence Enterprise Edition with the OLAP Option to Oracle Database 11g (July 2008)Generic white papers Enabling Pervasive BI through a Practical Data Warehouse Reference Architecture (February 2010) Optimizing and Protecting Storage with Oracle Database 11g Release 2 (November 2009) Oracle Database 11g for Data Warehousing and Business Intelligence (August 2009) Best practices for a Data Warehouse on Oracle Database 11g (September 2008)Technical PresentationsA selection of ObE - Oracle by Examples documents: Generic Using Basic Database Functionality for Data Warehousing (10g) Partitioning Manipulating Partitions in Oracle Database (11g Release 1) Using High-Speed Data Loading and Rolling Window Operations with Partitioning (11g Release 1) Using Partitioned Outer Join to Fill Gaps in Sparse Data (10g) Materialized View and Query Rewrite Using Materialized Views and Query Rewrite Capabilities (10g) Using the SQLAccess Advisor to Recommend Materialized Views and Indexes (10g) Oracle OLAP Using Microsoft Excel With Oracle 11g Cubes (how to analyze data in Oracle OLAP Cubes using Excel's native capabilities) Using Oracle OLAP 11g With Oracle BI Enterprise Edition (Creating OBIEE Metadata for OLAP 11g Cubes and querying those in BI Answers) Building OLAP 11g Cubes Querying OLAP 11g Cubes Creating Interactive APEX Reports Over OLAP 11g CubesSelection of presentations from the BIWA website:Extreme Data Warehousing With Exadata  by Hermann Baer (July 2010) (slides 2.5MB, recording 54MB)Data Mining Made Easy! 
Introducing Oracle Data Miner 11g Release 2 New "Work flow" GUI   by Charlie Berger (May 2010) (slides 4.8MB, recording 85MB )Best Practices for Deploying a Data Warehouse on Oracle Database 11g  by Maria Colgan (December 2009)  (slides 3MB, recording 18MB, white paper 3MB )

    Read the article

  • WD MBWE II (White Strip Light) 2TB - unable to access data

    - by user210477
    I have a WD MBWE II (White Strip Light) 2TB - (WD20000H2NC-00) Was working fine until a few days ago. I guess there was a power failure and after that I am unable to access the 'Public' or the 'Download' folder anymore. I have been searching for answers everywhere but came up empty handed. Web GUI still works, SSH works. I hooked up both the drives on my PC and UFS Explorer sees the drive. But so far I am unable to retrieve any of my data. I do not remember what RAID setting I used when I first got the drive. I can see from GUI that it is set as "Stripe". The drive contains 10 years of family pictures which I really do not want to loose. Sadly and stupidly, I didn't even keep a backup of this drive. Can somebody please help or point me in the right direction. Thank you in advance for your help. Disk Utility on Ubuntu reports 1405 bad sectors on one drive. How can I retrieve my data? Please help. Logs below: ~ # mdadm --detail /dev/md[012345678] /dev/md0: Version : 0.90 Creation Time : Wed Jul 15 08:36:17 2009 Raid Level : raid1 Array Size : 1959872 (1914.26 MiB 2006.91 MB) Used Dev Size : 1959872 (1914.26 MiB 2006.91 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:53:29 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 04f7a661:98983b3b:26b29e4f:9b646adb Events : 0.266 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 /dev/md1: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 256896 (250.92 MiB 263.06 MB) Used Dev Size : 256896 (250.92 MiB 263.06 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 30 22:08:21 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : aaa7b859:c475312d:efc5a766:6526b867 Events : 0.10 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 18 1 active sync /dev/sdb2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 /dev/md3: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 987904 (964.91 MiB 1011.61 MB) Used Dev Size : 987904 (964.91 MiB 1011.61 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 3 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:26:33 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 3f4099f2:72e6171b:5ba962fd:48464a62 Events : 0.54 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 mdadm: md device /dev/md4 does not appear to be active. mdadm: md device /dev/md5 does not appear to be active. mdadm: md device /dev/md6 does not appear to be active. mdadm: md device /dev/md7 does not appear to be active. mdadm: md device /dev/md8 does not appear to be active. 
~ # cat /etc/mtab securityfs /sys/kernel/security securityfs rw 0 0 /dev/md2 /DataVolume xfs rw,usrquota 0 0 /dev/md4 /ExtendVolume xfs rw,usrquota 0 0 ~ # df -k Filesystem 1k-blocks Used Available Use% Mounted on /dev/md0 1929044 145092 1685960 8% / /dev/md3 972344 123452 799500 13% /var /dev/ram0 63412 20 63392 0% /mnt/ram ~ # mdadm -D /dev/md2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 ~ # mdadm -D /dev/md4 mdadm: md device /dev/md4 does not appear to be active. ~ # mount /dev/root on / type ext3 (rw,noatime,data=ordered) proc on /proc type proc (rw) sys on /sys type sysfs (rw) /dev/pts on /dev/pts type devpts (rw) securityfs on /sys/kernel/security type securityfs (rw) /dev/md3 on /var type ext3 (rw,noatime,data=ordered) /dev/ram0 on /mnt/ram type tmpfs (rw) ~ # cat /var/log/messages Oct 29 18:04:50 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 29 18:17:45 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 00:50:11 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 16:29:47 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 18:27:22 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:06:03 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:14:58 shmotashNAS daemon.warn wixEvent[3462]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. 
Oct 30 19:20:05 shmotashNAS daemon.alert wixEvent[3462]: Thermal Alarm - System temperature exceeded threshold.(66 degrees) Oct 30 19:58:29 shmotashNAS daemon.alert wixEvent[3462]: HDD SMART - HDD 1 SMART Health Status: Failed. Oct 30 22:05:39 shmotashNAS daemon.info init: Starting pid 13043, console /dev/null: '/usr/bin/killall' Oct 30 22:05:39 shmotashNAS syslog.info System log daemon exiting. Oct 30 22:08:09 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:08:09 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:19 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:25 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:37 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:44 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:08:46 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: ****** database does not exist. ret = -1, creating path Oct 30 22:08:49 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:08:50 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4605, console /dev/null: '/bin/touch' Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4607, console /dev/ttyS0: '/sbin/getty' Oct 30 22:09:10 shmotashNAS daemon.info wixEvent[3557]: System Startup - System startup. Oct 30 22:09:16 shmotashNAS daemon.warn wixEvent[3557]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 22:14:14 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:29:36 shmotashNAS daemon.warn wixEvent[3557]: System Reboot - System will reboot. Oct 30 22:29:40 shmotashNAS daemon.info init: Starting pid 5974, console /dev/null: '/usr/bin/killall' Oct 30 22:29:40 shmotashNAS syslog.info System log daemon exiting. 
Oct 30 22:47:56 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:47:56 shmotashNAS daemon.warn wixEvent[3461]: Network Link - NIC 1 link is down. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:48:09 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4079, console /dev/null: '/bin/touch' Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4080, console /dev/ttyS0: '/sbin/getty' Oct 30 22:48:28 shmotashNAS daemon.info wixEvent[3461]: System Startup - System startup. Oct 30 22:49:01 shmotashNAS daemon.warn wixEvent[3461]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 23:51:11 shmotashNAS daemon.warn wixEvent[3461]: System Reboot - System will reboot. Oct 30 23:51:16 shmotashNAS daemon.info init: Starting pid 6498, console /dev/null: '/usr/bin/killall' Oct 30 23:51:16 shmotashNAS syslog.info System log daemon exiting. Oct 30 23:54:19 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 23:55:44 [Version 01.09.00.96] ++++++++++++++ Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === Walking directory done. 
Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4115, console /dev/null: '/bin/touch' Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4116, console /dev/ttyS0: '/sbin/getty' Oct 30 23:55:58 shmotashNAS daemon.info wixEvent[3476]: System Startup - System startup. Oct 30 23:56:33 shmotashNAS daemon.warn wixEvent[3476]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 00:29:14 shmotashNAS auth.info sshd[5409]: Server listening on 0.0.0.0 port 22. Oct 31 00:31:25 shmotashNAS auth.info sshd[5486]: Accepted password for root from 192.168.1.100 port 50785 ssh2 Oct 31 00:33:44 shmotashNAS auth.info sshd[5565]: Accepted password for root from 192.168.1.100 port 50817 ssh2 Oct 31 00:36:39 shmotashNAS daemon.info init: Starting pid 5680, console /dev/null: '/usr/bin/killall' Oct 31 00:36:39 shmotashNAS syslog.info System log daemon exiting. Oct 31 00:40:44 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:31 - 00:41:00 [Version 01.09.00.96] ++++++++++++++ Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 31 00:41:01 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4101, console /dev/null: '/bin/touch' Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4102, console /dev/ttyS0: '/sbin/getty' Oct 31 00:41:15 shmotashNAS daemon.info wixEvent[3464]: System Startup - System startup. Oct 31 00:41:47 shmotashNAS daemon.warn wixEvent[3464]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 01:13:19 shmotashNAS daemon.info init: Starting pid 5385, console /dev/null: '/usr/bin/killall' Oct 31 01:13:19 shmotashNAS syslog.info System log daemon exiting. Nov 1 13:26:25 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network Link - NIC 1 link is up 100 Mbps full duplex. Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:11:01 - 13:26:38 [Version 01.09.00.96] ++++++++++++++ Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: mc_db_init ... 
Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === inotify init done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === Walking directory done. Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4078, console /dev/null: '/bin/touch' Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4079, console /dev/ttyS0: '/sbin/getty' Nov 1 13:26:52 shmotashNAS daemon.info wixEvent[3471]: System Startup - System startup. Nov 1 13:27:28 shmotashNAS daemon.warn wixEvent[3471]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Nov 1 13:44:48 shmotashNAS auth.info sshd[5375]: Accepted password for root from 192.168.1.103 port 50217 ssh2 Nov 1 13:51:08 shmotashNAS auth.info sshd[5894]: Accepted password for root from 192.168.1.103 port 50380 ssh2

    Read the article

  • I can't update my system properly, "no package header" error

    - by joel
    Every time I try to run sudo apt-get update or try running updates from the GUI interface I run into the following problem or something similar: Reading package lists... Error! E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_restricted_binary-i386_Packages E: The package lists or status file could not be parsed or opened. I've tried purging using sudo rm -rf <filename> where <filename> is the listed file above, and then running sudo apt-get update to fix it (as listed elsewhere in this forum) and no luck, just keep getting this message. I'm running Ubuntu 12.04 and this is getting really frustrating... I just want a system that runs smoothly and doesn't require it's hand to be held when it comes to updates. Tried the solutions posted below and am still receiving the same errors, sample output: W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-amd64_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_restricted_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_multiverse_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_universe_source_Sources Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_restricted_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_universe_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_multiverse_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-backports_universe_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_main_source_Sources Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_universe_binary-amd64_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_main_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_universe_binary-i386_Packages Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_i18n_Translation-en%5fCA Encountered a section with no Package: header W: Failed to fetch 
gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_main_i18n_Translation-en%5fCA Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_main_i18n_Translation-en Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_multiverse_i18n_Translation-en%5fCA Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_universe_i18n_Translation-en%5fCA Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-backports_main_i18n_Translation-en%5fCA Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-backports_multiverse_i18n_Translation-en%5fCA Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-backports_universe_i18n_Translation-en%5fCA Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_main_i18n_Translation-en%5fCA Encountered a section with no Package: header W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_multiverse_i18n_Translation-en%5fCA Encountered a section with no Package: header E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • How to Install Oracle Software on Remote Linux Server

    - by James Taylor
    It is becoming more common these days to install Oracle software on remote Linux servers. This issue has always existed but was generally resolved either by silent installs or by someone physically going to the server to install the software. This is becoming more difficult with the popular virtualisation and cloud deployment strategies. This post provides the steps involved to install Oracle software using the GUI on a remote Linux server. There are many ways to achieve this; the way I resolve this issue is via Virtual Network Computing (VNC), as it is shipped with RedHat and OEL out of the box. For this post I’m using OEL 5 deployed on an OVM guest. If you have not already done so, download and install a client version of VNC so you can connect to the server. There are many out there; for the purpose of this post I use UltraVNC. You can download a free version from http://www.uvnc.com/download/index.html By default VNC Server is installed in your RedHat and OEL OS, but it is not configured. The way VNC works is that when started it creates a client instance for the user and binds it to a specific port. So if you have an account on the Linux box you can set up a VNC Server session for that user; you don’t need to be root. For the purpose of this document I’m going to use oracle as the user to set up a VNC session, as this is the user I want to use to install the software. However, to start the VNC service you must be root. As the root user run the following command: service vncserver start Starting VNC server: no displays configured                [  OK  ] Log in to the Linux box as the user you want to use to install the Oracle software [oracle@lisa ~]$ Run the command to create a new VNC server instance for the oracle user: vncserver You will be asked to supply password information. This is what you will enter when connecting from your desktop client. This password is also independent of the actual Linux user password. The VNC Server is acting as a proxy to this instance. You will require a password to access your desktops. Password: Verify: xauth:  creating new authority file /home/oracle/.Xauthority New 'lisa.nz.oracle.com:1 (oracle)' desktop is lisa.nz.oracle.com:1 Creating default startup script /home/oracle/.vnc/xstartup Starting applications specified in /home/oracle/.vnc/xstartup Log file is /home/oracle/.vnc/lisa.nz.oracle.com:1.log As you can see a new instance lisa.nz.oracle.com:1 has been created. If you were to run the vncserver command again, another instance lisa.nz.oracle.com:2 would be created. If you are going through a firewall you will need to ensure that port 5901 (display :1) is open between your client desktop and the Linux server. Depending on the options chosen at install time a firewall could be in place. The simplest way to disable it is using the following command (you will need to be root): service iptables stop This will stop the firewall while you install. If you just want to add a port to the accepted lists, use the firewall UI (again as root): system-config-security-level Now you are ready to connect to the server via VNC. Using the software installed in step one, start the VNC client. You should be prompted for the server and port. If connectivity is established, you will be prompted for the password entered in step 5. You should now be presented with a terminal screen ready to install software. Go to the location of the Oracle install software and start the Oracle Universal Installer

    Read the article

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source-code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars LLBLGen Pro v3's designer is built on. The library contains many data-structures and algorithms, and the source-code is well documented and commented, often with links to official descriptions and papers of the algorithms and data-structures implemented. The source-code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon. One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and datastructures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature. The following functionality is available in Algorithmia: Command, Command management. This system is usable to build a fully undo/redo aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping so you can use it to make a class undo/redo aware and set properties, use its contents without using commands at all. The Commands namespace is the namespace to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples. Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented onto them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature rich depth-first search (DFS) crawler is available so DFS based algorithms can be implemented quickly. All graph classes are undo/redo aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo-redo aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you set as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code. Heaps. Heaps are data-structures which have the largest or smallest item stored in them always as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap, all heaps in Algorithmia can do both). 
Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap datastructures known today, especially when you want to merge different instances into one. Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them. Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data-structure it already is stored in. PropertyBag. It re-implements Tony Allowatt's original idea in .NET 3.5 specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data / settings stored in XML or other format, and want to create an editable form of it without creating many editors. IEditableObject/IDataErrorInfo implementations. It contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo aware code can use them out of the box. EventThrottler. It contains an event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data-structures observed by your UI or a middle tier, you can use this class to filter out duplicates to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events. Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key, instead of one with the default Dictionary, and is also merge-aware so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer based system, and some utility extension methods for the defined data-structures. We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!
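    To make the MultiValueDictionary idea above concrete, here is a small from-scratch C# sketch of the concept - multiple unique values per key, plus a merge operation. It is illustrative only and deliberately does not use Algorithmia's actual class or member names, which may differ.

    using System.Collections.Generic;

    // Illustrative sketch of the multi-value dictionary concept; not Algorithmia's implementation.
    public class SimpleMultiValueDictionary<TKey, TValue>
    {
        private readonly Dictionary<TKey, HashSet<TValue>> _storage = new Dictionary<TKey, HashSet<TValue>>();

        public void Add(TKey key, TValue value)
        {
            HashSet<TValue> values;
            if (!_storage.TryGetValue(key, out values))
            {
                values = new HashSet<TValue>();
                _storage.Add(key, values);
            }
            values.Add(value); // HashSet keeps the values unique per key
        }

        public IEnumerable<TValue> GetValues(TKey key)
        {
            HashSet<TValue> values;
            return _storage.TryGetValue(key, out values) ? (IEnumerable<TValue>)values : new TValue[0];
        }

        // Merge-aware, as described for the library: pull another instance's key/value pairs into this one.
        public void Merge(SimpleMultiValueDictionary<TKey, TValue> other)
        {
            foreach (KeyValuePair<TKey, HashSet<TValue>> pair in other._storage)
                foreach (TValue value in pair.Value)
                    Add(pair.Key, value);
        }
    }

    Usage is as you would expect: adding the same value twice under one key leaves a single stored value, while different values accumulate under that key.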

    Read the article

  • Upgrading VSIX extensions from VS2012 to VS2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/27/upgrading-vsix-extensions-from-vs2012-to-vs2013.aspx  As consumers of your Visual Studio extensions start to move over to VS 2013, you will have to upgrade the Visual Studio extensions you build for Visual Studio 2012 to Visual Studio 2013 and republish to the Visual Studio extension gallery. Failing which, it will not be possible for your consumers to install and use your extensions on Visual Studio 2013.   Objective In this blog post, I’ll show you how simple it is to upgrade your Visual Studio 2012 extension to Visual Studio 2013. There aren’t any reported breaking changes between VS 2012 SDK and VS 2013 SDK, the upgrade usually involves, rebuilding the extension against VS 2013 SDK and updating the vsix manifest file.              Walkthrough Download the Visual Studio 2013 SDK - You will need to download the Visual Studio 2013 SDK in order to open up the Visual Studio extension project in Visual Studio 2013. The SDK can be downloaded from here. Install the SDK before you proceed.                2. Once the VS 2013 SDK has been installed, open up your package project. For the purposes of this blog post, I’ll open up the Avanade Extension – Software Inventory in Visual Studio 2013. You will notice that Visual Studio doesn’t load the project but let’s you know that the project needs to be Migrated.                  3. Right click the project and choose the option ‘Reload Project’ from the Context Menu.                  4. Choosing the Reload Project option brings up an upgrade window, telling you that the upgrade is a one way only upgrade i.e. the project will be changed to work with Visual Studio 2013 and you will not be able to open the project up in Visual Studio 2012. My recommendation would be to create a Visual Studio 2013 branch and upgrading the project in that branch only, so if you need to go back to Visual Studio 2012 project at some point, you have a handy reference in a separate branch.             5. Upon clicking Ok, the project is updated. See below, the following changes are made at the time of upgrade,           - The runtime version is updated in the Resources.Designer.cs file                      - The Minimum version of Visual Studio in the package project file is changed from 11.0 to 12.0                    6. Reference VS 2013 dll’s rather than VS 2012 dll’s. So reference Microsoft.TeamFoundation.Client.dll and Microsoft.TeamFoundation.Controls.dll from C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0 and C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v4.5. If you have any other API references, then change the references to point to VS 2013 instead of VS 2012.                          7. Rebuild your solution to ensure there are no breaking changes. Success!                8. Update VSIX Manifest file (the file source.extnsion.vsixmanifest contains the meta data for your VSIX).          - Update the Install Targets from 11.0 to 12.0. This basically enforces that the extension can be installed on Visual Studio 2013 version of Visual Studio.                         - Update the Dependencies from Visual Studio MPF 11.0 to Visual Studio MPF 12.0              9. Rebuild the solution and open up the bin folder for the Package project and look for the file *.vsix file [Microsoft Visual Studio Extension].         - This is basically the installer for your extension.                 
    - Double-click the installer to launch the installer wizard. Voila! You can see the package installation wizard opens up and gives you the option to install the extension for Visual Studio 2013. - Click Install to continue. - Note – If you run into the exception “23/06/2013 10:42:18 - Install Error : Microsoft.VisualStudio.ExtensionManager.InstallByMsiException: The InstalledByMSI element in extension Avanade Extensions cannot be 'true' when installing an extension through the Extensions and Updates Installer.  The element can only be 'true' when an MSI lays down the extension manifest file.” Ensure you have the option “This VSIX is installed by Windows Installer” unchecked in the Install Targets tab. 10. Verifying that the extension has installed correctly. - Open Extension Manager and verify that the installed extension shows up in the extension manager “list of installed VSIX”. 11. First Look at the updated Extension - The links have now been moved to the context menu, so to see the navigation links, you’ll have to right-click on the icon and select the option from the context menu. Note – The Avanade Extension being used in the demo has been developed by Utkarsh and Tarun. The Software Inventory Extension for Visual Studio 2012… allows you to see the list of software installed on the hosted build server right from within Visual Studio; the extension also allows you to export this list to Excel. More details on how this has been implemented can be found here. I hope you found this useful. In case you have any questions or feedback, feel free to reach out on the Visual Studio extensibility MSDN forums or via the Microsoft Visual Studio feedback forum. Thank you for taking the time to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • asynchrony is viral

    - by Daniel Moth
    It is becoming hard to write code today without introducing some form of asynchrony and, if you are using .NET (e.g. for Windows Phone 8 or Windows Store apps), that means sooner or later you have to await something and mark your method as async. My most recent examples included introducing speech recognition in my Translator By Moth phone app where I had to await mySpeechRecognizerUI.RecognizeWithUIAsync() and when moving that code base to a Windows Store project just to show a MessageBox I had to await myMessageDialog.ShowAsync(). Any time you need to invoke an asynchronous method in your code, you have a choice to make: kick off the operation but don’t wait for it to complete (otherwise known as fire-and-forget), synchronously wait for it to complete (which will entail blocking, which can be bad, especially on a UI thread), or asynchronously wait for it to complete before continuing on with the rest of the method’s work. In most cases, you want the latter, and the await keyword makes that trivial to implement.  When you use the magical await keyword in front of an API call, then you typically have to make additional changes to your code: This await usage is within a method of course, and now you have to annotate that method with async. Furthermore, you have to change the return type of the method you just annotated so it returns a Task (if it previously returned void), or Task<myOldReturnType> (if it previously returned myOldReturnType). Note that if it returns void, in some cases you could cheat and stop there. Furthermore, any method that called this method you just annotated with async will now also be invoking an asynchronous operation, so you have to make that change in the body of the caller method to introduce the await keyword before the call to the method. …you guessed it, you now have to change this caller method to be annotated with async and have its return types tweaked... …and it goes on virally… At some point you reach the root of your user code, e.g. a GUI event handler, and whoever calls that void method can already deal with the fact that you marked it as async and the viral introduction of the keywords stops there… This is all wonderful progress and a very powerful mechanism, and I just wish someone had written a refactoring tool to take care of this… anyone? I mentioned earlier that you have a choice when invoking an asynchronous operation. If the first time you encounter this you wish to localize the impact of all these changes and essentially try to turn the asynchronous behavior into synchronous by blocking - don't! For reasons why you don't want to do that, read Toub's excellent blog post (and check out the rest of his blog with gems on async programming starting with the Async FAQ). Just embrace the pattern knowing that when you use one instance of an await, you'll propagate the change all the way to the root user code method, e.g. typically an event handler. Related aside: I just finished re-writing my MessageBox wrapper class for Phone projects, including making it work in Windows Store projects, and it does expect you to use it with an await :-). I'll share that in an upcoming post for those of you that have the same need… Comments about this post by Daniel Moth welcome at the original blog.
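    As a minimal sketch of the viral propagation described above (all type and method names here are made up for illustration, and the async Main entry point assumes C# 7.1 or later; in a GUI app the chain would typically stop at an async void event handler instead):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ViralAsyncSketch
    {
        // Leaf method: it awaits an async API, so it must be marked async
        // and its return type becomes Task<string> instead of string.
        static async Task<string> FetchPageAsync(string url)
        {
            using (var client = new HttpClient())
            {
                return await client.GetStringAsync(url);
            }
        }

        // Its caller awaits the leaf method, so it too becomes async
        // and its former void return type becomes Task.
        static async Task ShowPageLengthAsync(string url)
        {
            string html = await FetchPageAsync(url);
            Console.WriteLine(url + ": " + html.Length + " characters");
        }

        // Root of the user code; in a real app this would usually be an
        // event handler, which is where the viral keyword chain stops.
        static async Task Main()
        {
            await ShowPageLengthAsync("https://example.com");
        }
    }

    Note that every method in the chain picked up both the async keyword and a Task-based return type solely because the innermost call was awaited, which is exactly the ripple effect described above.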

    Read the article

  • Talking JavaOne with Rock Star Simon Ritter

    - by Janice J. Heiss
    Oracle’s Java Technology Evangelist Simon Ritter is well known at JavaOne for his quirky and fun-loving sessions, which, this year, include:
CON4644 -- “JavaFX Extreme GUI Makeover” (with Angela Caicedo on how to improve UIs in JavaFX)
CON5352 -- “Building JavaFX Interfaces for the Real World” (Kinect gesture tracking and mind reading)
CON5348 -- “Do You Like Coffee with Your Dessert?” (Some cool demos of Java on the Raspberry Pi)
CON6375 -- “Custom JavaFX Charts” (How to extend JavaFX Chart controls with some interesting things)
I recently asked Ritter about the significance of the Raspberry Pi, the topic of one of his sessions. The Raspberry Pi is a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. “I don't think there's one definitive thing that makes the RP significant,” observed Ritter, “but a combination of things that really makes it stand out. First, it's the cost: $35 for what is effectively a completely usable computer. OK, so you have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM processor is also significant, as it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. Combine these two things with the immense groundswell of community support and it provides a fantastic platform for teaching young and old alike about computing, which is the real goal of the project.” He informed me that he’ll be at the Raspberry Pi meetup on Saturday (not part of JavaOne). Check out the details here.
JavaFX Interfaces
When I asked about how JavaFX can interface with the real world, he said that there are many ways. “JavaFX provides you with a simple set of programming interfaces that can create complex, cool and compelling user interfaces,” explained Ritter. “Because it's just Java code you can combine JavaFX with any other Java library to provide data to display and control the interface. What I've done for my session is look at some of the possible ways of doing this using some of the amazing hardware that's available today at very low cost. The Kinect sensor has added a new dimension to gaming in terms of interaction; there's a Java API to access this so you can easily collect skeleton tracking data from it. Some clever people have also written libraries that can track gestures like swipes, circles, pushes, and so on. We use these to control parts of the UI. I've also experimented with a Neurosky EEG sensor that can in some ways ‘read your mind’ (well, at least measure some of the brain functions like attention and meditation). I've written a Java library for this that I include as a way of controlling the UI. We're not quite at the stage of just thinking a command though!”
Here Comes Java Embedded
And what, from Ritter’s perspective, is the most exciting thing happening in the world of Java today? “I think it's seeing just how Java continues to become more and more pervasive,” he said. “One of the areas that is growing rapidly is embedded systems. We've talked about the ‘Internet of things’ for many years; now it's finally becoming a reality. With the ability of more and more devices to include processing, storage and networking we need an easy way to write code for them that's reliable, has high performance, and is secure. Java fits all these requirements. With Java Embedded being a conference within a conference, I'm very excited about the possibilities of Java in this space.” Check out Ritter’s sessions or say hi if you run into him. Originally published on blogs.oracle.com/javaone.

    Read the article

  • MySQL Connector/Net 6.4.6 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.4.6, a new version of the all-managed .NET driver for MySQL, has been released. This is a maintenance release and is recommended for use in production environments. It is appropriate for use with MySQL server versions 5.0-5.6. This is intended to be the final release for Connector/NET 6.4. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.4.6 version of MySQL Connector/Net brings the following fixes:
- Fix for List.Contains generating a bunch of ORs instead of a more efficient IN clause in LINQ to Entities (Oracle bug #14016344, MySql bug #64934).
- Fix for error when trying to change the name of an Index in the Indexes/Keys editor; along with this fix, users can now change the Index type of a new Index, which could not be done in previous versions, and when changing the Index name the change is reflected in the list view at the left side of the Indexes/Keys editor (Oracle bug #13613801).
- Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699).
- Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363).
- Fix for script generated for code first containing a wrong alter table statement and a wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091).
- Fix for exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779).
- Fix for session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracle bug #13733054).
- Fixed deleting a user profile using the Profile provider (MySQL bug #64409, Oracle bug #13790123).
- Fix for bug "Cannot Create an Entity with a Key of Type String" (MySQL bug #65289, Oracle bug #14540202). This fix checks whether the type has a FixedLength facet set in order to create a char; otherwise it creates varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First (also tested in Database First). Unit tests added for Code First and ProviderManifest.
- Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL bug #66578, Oracle bug #14593547).
- Fix for handling unnamed parameters in MySqlCommand. This fix allows MySqlCommand to handle parameters without requiring names (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL bug #66060, Oracle bug #14499549).
- Fixed inheritance in Entity Framework Code First scenarios. The discriminator column is created using its correct type, varchar(128) (MySql bug #63920 and Oracle bug #13582335).
- Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048).
- Fixed bug "ASP.NET Membership database fails on MySql database UTF32" (MySQL bug #65144, Oracle bug #14495292).
- Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960) by changing several internal declarations of lastinsertid from int to long.
- Fixed "Decimal type should have digits at right of decimal point"; the default is now 2, but the user's changes in the EDM designer are recognized (MySql bug #65127, Oracle bug #14474342).
- Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715).
- Fix for error when calling RoleProvider.RemoveUserFromRole(), which caused an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338).
- Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand"; too many MemoryStream instances were created (MySql bug #65696, Oracle bug #14468204).
- Small improvement to MySqlPoolManager CleanIdleConnections for a better idle cleanup timer at startup (MySql bug #66472 and Oracle bug #14652624).
- Fix for bug "TIMESTAMP values are mistakenly represented as DateTime with Kind = Local" (MySql bug #66964, Oracle bug #14740705).
- Fix for bug "Keyword not supported. Parameter name: AttachDbFilename" (MySql bug #66880, Oracle bug #14733472).
- Added support to the MySql script file to retrieve data when using "SHOW" statements.
- Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674).
- Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718).
- Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176).
- Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964).
The release is available to download at http://dev.mysql.com/downloads/connector/net/6.4.html
Documentation
-------------------------------------
You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html
You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!
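    To make the first fix in the list concrete, here is a minimal, hypothetical sketch of the LINQ to Entities pattern it refers to; the Order entity and ShopContext are invented for the example and assume an Entity Framework code-first model running on top of Connector/Net:

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    public class Order
    {
        public int OrderId { get; set; }
        public int CustomerId { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }

    public static class OrderQueries
    {
        // With the 6.4.6 fix, the Contains call below should be translated into
        // a single SQL IN clause (WHERE CustomerId IN (...)) rather than a long
        // chain of OR comparisons.
        public static List<Order> ForCustomers(ShopContext db, List<int> customerIds)
        {
            return db.Orders
                     .Where(o => customerIds.Contains(o.CustomerId))
                     .ToList();
        }
    }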

    Read the article

  • Another sound not working post

    - by Thomas Smart
    Tried all the other "sound not working" posts i think, lost count. purge/reinstall alsa and pulse, reboot, add user to audio group, various lines in the alsa config file such as "options snd-hda-intel model=" then tried different options like generic, auto, basic, default, etc. tried pulseaudio -k && sudo alsa force-reload a few times, with and without rebooting. Hardware: 16gb ram, core I7-4790, Intel Haswell mboard with onboard sound and graphics Multimedia: Audio Adapter: HDA-Intel-HDA Intel HDMI OS: Ubuntu server 14.04 with ubuntu-desktop installed. GUI sound settings lists only the dummy sound card alsamixer -c 0 ¦ Card: HDA Intel HDMI F1: Help ¦ ¦ Chip: Intel Haswell HDMI F2: System information ¦ ¦ View: F3:[Playback] F4: Capture F5: All F6: Select sound card ¦ ¦ Item: S/PDIF ¦ ¦ +--+ ¦ ¦ ¦OO¦ ¦ ¦ +--+ ¦ ¦ < S/PDIF > ¦ aplay -l **** List of PLAYBACK Hardware Devices **** card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0 aplay -L default Playback/recording through the PulseAudio sound server null Discard all samples (playback) or generate zero samples (capture) pulse PulseAudio Sound Server hdmi:CARD=HDMI,DEV=0 HDA Intel HDMI, HDMI 0 HDMI Audio Output dmix:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample mixing device dsnoop:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample snooping device hw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct hardware device without any conversions plughw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Hardware device with all software conversions cat /proc/asound/cards 0 [HDMI ]: HDA-Intel - HDA Intel HDMI HDA Intel HDMI at 0xf7d14000 irq 46 cat /proc/asound/devices 1: : sequencer 2: [ 0- 3]: digital audio playback 3: [ 0- 0]: hardware dependent 4: [ 0] : control 33: : timer mplayer -ao alsa:device=hdmi /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. 
[lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] alsa-lib: confmisc.c:768:(parse_card) cannot find card '1' [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:392:(snd_func_concat) error evaluating strings [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:1251:(snd_func_refer) error evaluating name [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory [AO_ALSA] alsa-lib: conf.c:4727:(snd_config_expand) Evaluate error: No such file or directory [AO_ALSA] alsa-lib: pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM hdmi [AO_ALSA] Playback open error: No such file or directory Failed to initialize audio driver 'alsa:device=hdmi' Could not open/initialize audio device -> no sound. Audio: no sound Video: no video Exiting... (End of file) mplayer -ao alsa:device=hw=0.3 /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. [lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] Format floatle is not supported by hardware, trying default. AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample) Video: no video Starting playback... A: 0.4 (00.4) of 0.8 (00.7) 0.1% Exiting... (End of file) Thank you for your time and help :)

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first. They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog. But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler. Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this: A table in Oracle SQL Developer Data Modeler What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS. Search, filter, then Report With the filter set to ‘Columns,’ if I do a report I’ll be only getting the columns that are resolving to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button: XLS or XLSX, either format is just fine I want to decide how the Column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and setup a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work Now let the Excel junkies do their stuff Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude: Be kind, please rew…use comments. Save the file, email it back to your modeler. Update the model from Excel That’s right, it’s a right mouse click from your model in the tree If everything goes right, you’ll see a nice confirmation message: It’s alive! 
Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now. Voila! Why are we doing this again? The goal is to reduce the number of round-trips between the modeler and the business process owner. They’re used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • F# WPF Form &ndash; the basics

    - by MarkPearl
    I was listening to Dot Net Rocks show #560 about F# and during the podcast Richard Campbell brought up a good point with regards to F# and a GUI. In essence what I understood his point to be was that until one could write an end to end application in F#, it would be a hard sell to developers to take it on. In part I agree with him, while I am beginning to really enjoy learning F#, I can’t but help feel that I would be a lot further into the language if I could do my Windows Forms like I do in C# or VB.NET for the simple reason that in “playing” applications I spend the majority of the time in the UI layer… So I have been keeping my eye out for some examples of creating a WPF form in a F# project and came across Tim’s F# Twitter Stream Sample, which had exactly this…. of course he actually had a bit more than a basic form… but it was enough for me to scrap the insides and glean what I needed. So today I am going to make just the very basic WPF form with all the goodness of a XAML window. Getting Started First thing we need to do is create a new solution with a blank F# application project – I have made mine called FSharpWPF. Once you have the project created you will need to change the project type from a Console Application to a Windows Application. You do this by right clicking on the project file and going to its properties… Once that is done you will need to add the appropriate references. You do this by right clicking on the References in the Solution Explorer and clicking “Add Reference'”. You should add the appropriate .Net references below for WPF & XAMl to work. Once these references are added you then need to add your XAML file to the project. You can do this by adding a new item to the project of type xml and simply changing the file extension from xml to xaml. Once the xaml file has been added to the project you will need to add valid window XAML. Example of a very basic xaml file is shown below… <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="F# WPF WPF Form" Height="350" Width="525"> <Grid> </Grid> </Window> Once your xaml file is done… you need to set the build action of the xaml file from “None” to “Resource” as depicted in the picture below. If you do not set this you will get an IOException error when running the completed project with a message along the lines of “Cannot locate resource ‘window.xaml’ You then need to tie everything up by putting the correct F# code in the Program.fs to load the xaml window. In the Program.fs put the following code… module Program open System open System.Collections.ObjectModel open System.IO open System.Windows open System.Windows.Controls open System.Windows.Markup [<STAThread>] [<EntryPoint>] let main(_) = let w = Application.LoadComponent(new System.Uri("/FSharpWPF;component/Window.xaml", System.UriKind.Relative)) :?> Window (new Application()).Run(w) Once all this is done you should be able to build and run your project. What you have done is created a WPF based window inside a FSharp project. It should look something like below…   Nothing to exciting, but sufficient to illustrate the very basic WPF form in F#. Hopefully in future posts I will build on this to expose button events etc.

    Read the article

  • jQuery Selector Tester and Cheat Sheet

    - by SGWellens
    I've always appreciated these tools: Expresso and XPath Builder. They make designing regular expressions and XPath selectors almost fun! Did I say fun? I meant less painful. Being able to paste/load text and then interactively play with the search criteria is infinitely better than the code/compile/run/test cycle. It's faster and you get a much better feel for how the expressions work. So, I decided to make my own interactive tool to test jQuery selectors:  jQuery Selector Tester.   Here's a sneak peek: Note: There are some existing tools you may like better: http://www.woods.iki.fi/interactive-jquery-tester.html http://www.w3schools.com/jquery/trysel.asp?filename=trysel_basic&jqsel=p.intro,%23choose My tool is different: It is one page. You can save it and run it locally without a Web Server. It shows the results as a list of iterated objects instead of highlighted html. A cheat sheet is on the same page as the tester which is handy. I couldn't upload an .htm or .html file to this site so I hosted it on my personal site here: jQuery Selector Tester. Design Highlights: To make the interactive search work, I added a hidden div to the page: <!--Hidden div holds DOM elements for jQuery to search--><div id="HiddenDiv" style="display: none"></div> When ready to search, the searchable html text is copied into the hidden div…this renders the DOM tree in the hidden div: // get the html to search, insert it to the hidden divvar Html = $("#TextAreaHTML").val();$("#HiddenDiv").html(Html); When doing a search, I modify the search pattern to look only in the HiddenDiv. To do that, I put a space between the patterns.  The space is the Ancestor operator (see the Cheat Sheet): // modify search string to only search in our// hidden div and do the searchvar SearchString = "#HiddenDiv " + SearchPattern;try{    var $FoundItems = $(SearchString);}   Big Fat Stinking Faux Pas: I was about to publish this article when I made a big mistake: I tested the tool with Mozilla FireFox. It blowed up…it blowed up real good. In the past I’ve only had to target IE so this was quite a revelation. When I started to learn JavaScript, I was disgusted to see all the browser dependent code. Who wants to spend their time testing against different browsers and versions of browsers? Adding a bunch of ‘if-else’ code is a tedious and thankless task. I avoided client code as much as I could. Then jQuery came along and all was good. It was browser independent and freed us from the tedium of worrying about version N of the Acme browser. Right? Wrong! I had used outerHTML to display the selected elements. The problem is Mozilla FireFox doesn’t implement outerHTML. I replaced this: // encode the html markupvar OuterHtml = $('<div/>').text(this.outerHTML).html(); With this: // encode the html markupvar Html = $('<div>').append(this).html();var OuterHtml = $('<div/>').text(Html).html(); Another problem was that Mozilla FireFox doesn’t implement srcElement. I replaced this: var Row = e.srcElement.parentNode;  With this: var Row = e.target.parentNode; Another problem was the indexing. The browsers have different ways of indexing. 
I replaced this: // this cell has the search pattern  var Cell = Row.childNodes[1];   // put the pattern in the search box and search  $("#TextSearchPattern").val(Cell.innerText);  With this: // get the correct cell and the text in the cell  // place the text in the search box and search  var Cell = $(Row).find("TD:nth-child(2)"); var CellText = Cell.text(); $("#TextSearchPattern").val(CellText);   So much for the myth of browser independence. Was I overly optimistic and gullible? I don’t think so. And when I get my millions from the deposed Nigerian prince I sent money to, you’ll see that having faith is not futile. Notes: My goal was to have a single standalone file. I tried to keep the features and CSS to a minimum–adding only enough to make it useful and visually pleasing. When testing, I often thought there was a problem with the jQuery selector. Invariably it was invalid html code. If your results aren't what you expect, don't assume it's the jQuery selector pattern: The html may be invalid. To help in development and testing, I added a double-click handler to the rows in the Cheat Sheet table. If you double-click a row, the search pattern is put in the search box, a search is performed and the page is scrolled so you can see the results. I left the test html and code in the page. If you are using a CDN (non-local) version of the jQuery library, the designer in Visual Studio becomes extremely slow.  That's why there are two versions of the library in the header and one is commented out. For reference, here is the jQuery documentation on selectors: http://api.jquery.com/category/selectors/ Here is a much more comprehensive list of CSS selectors (which jQuery uses): http://www.w3.org/TR/CSS2/selector.html I hope someone finds this useful. Steve Wellens, CodeProject

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
    What is PerformancePoint Services? Most of the time, the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: simple single value metrics from any supported data source, or complex multiple value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us check what it is. PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved along with numerous enhancements and additional functionality. New PerformancePoint Services features: PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. New features and enhancements of SharePoint 2010 PerformancePoint Services: • With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You can also include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data. • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports and ultimately in dashboards. • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.
• Better Time Intelligence filtering capabilities that you can use to create and use dynamic time filters that are always up to date. Other improved filters improve the ability for dashboard users to quickly focus in on information that is most relevant. • Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web parts on the same page. • Easier to author and publish dashboard items by using Dashboard Designer. • SQL Server Analysis Services 2008 support. • Increased support for accessibility compliance in individual reports and scorecards. • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web part can be added to PerformancePoint dashboards or any SharePoint Server page. • Create analytics reports to better understand underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting. To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn’t include this feature. There are still many choices in SharePoint family of products that include SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007 and associated free SharePoint web parts and templates.

    Read the article

  • Review: A Quick Look at Reflector

    - by James Michael Hare
    I, like many, was disappointed when I heard that Reflector 7 was not free, and perhaps that’s why I waited so long to try it and just kept using my version 6 (which continues to be free).  But though I resisted for so long, I longed for the better features that were being developed, and began to wonder if I should upgrade.  Thus, I began to look into the features being offered in Reflector 7.5 to see what was new. Multiple Editions Reflector 7.5 comes in three flavors, each building on the features of the previous version: Standard – Contains just the Standalone application ($70) VS – Same as Standard but adds Reflector Object Browser for Visual Studio ($130) VSPro – Same as VS but adds ability to set breakpoints and step into decompiled code ($190) So let’s examine each of these features. The Standalone Application (Standard, VS, VSPro editions) Popping open Reflector 7.5 and looking at the GUI, we see much of the same familiar features, with a few new ones as well: Most notably, the disassembler window now has a tabbed window with navigation buttons.  This makes it much easier to back out of a deep-dive into many layers of decompiled code back to a previous point. Also, there is now an analyzer which can be used to determine dependencies for a given method, property, type, etc. For example, if we select System.Net.Sockets.TcpClient and hit the Analyze button, we’d see a window with the following nodes we could expand: This gives us the ability to see what a given type uses, what uses it, who exposes it, and who instantiates it. Now obviously, for low-level types (like DateTime) this list would be enormous, but this can give a lot of information on how a given type is connected to the larger code ecosystem. One of the other things I like about using Reflector 7.5 is that it does a much better job of displaying iterator blocks than Reflector 6 did. For example, if you were to take a look at the Enumerable.Cast() extension method in System.Linq, and dive into the CastIterator in Reflector 6, you’d see this: But now, in Reflector 7.5, we see the iterator logic much more clearly: This is a big improvement in the quality of their code disassembler and for me was one of the main reasons I decided to take the plunge and get version 7.5. The Reflector Object Browser (VS, VSPro editions) If you have the .NET Reflector VS or VSPro editions, you’ll find you have in Visual Studio a Reflector Object Browser window available where you can select and decompile any assembly right in Visual Studio. For example, if you want to take a peek at how System.Collections.Generic.List<T> works, you can either select List<T> in the Reflector Object Browser, or even simpler just select a usage of it in your code and CTRL + Click to dive in. – And it takes you right to a source window with the decompiled source: Setting Breakpoints and Stepping Into Decompiled Code (VSPro) If you have the VSPro edition, in addition to all the things said above, you also get the additional ability to set breakpoints in this decompiled code and step through it as if it were your own code: This can be a handy feature when you need to see why your code’s use of a BCL or other third-party library isn’t working as you expect. Summary Yes, Reflector is no longer free, and yes, that’s a bit of a bummer. But it always was and still is a very fine tool. 
If you still have Reflector 6, you aren’t forced to upgrade any longer, but the nicer disassembler (especially for iterator blocks) and the handy VS integration make the upgrade worth at least considering. So I leave it up to you: these are some of the features of Reflector 7.5. What are your thoughts? Technorati Tags: .NET, Reflector
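    To make the iterator-block point concrete, here is an illustrative C# sketch (not the actual BCL source) of an iterator with the same shape as the CastIterator behind Enumerable.Cast; Reflector 6 showed the compiler-generated state machine behind code like this, whereas Reflector 7.5 decompiles it back into the readable yield-return form:

    using System;
    using System.Collections;
    using System.Collections.Generic;

    static class CastSketch
    {
        // An iterator block: the compiler rewrites this into a hidden state
        // machine class, which is what older decompilers displayed.
        public static IEnumerable<TResult> CastIterator<TResult>(IEnumerable source)
        {
            foreach (object obj in source)
            {
                yield return (TResult)obj;   // each element is cast lazily, on demand
            }
        }

        static void Main()
        {
            var numbers = new ArrayList { 1, 2, 3 };
            foreach (int n in CastIterator<int>(numbers))
            {
                Console.WriteLine(n);   // prints 1, 2, 3
            }
        }
    }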

    Read the article

< Previous Page | 237 238 239 240 241 242 243 244 245 246 247 248  | Next Page >