Search Results

Search found 46973 results on 1879 pages for 'return path'.


  • virt-install virtual machine hangs on Probing EDD

    - by user2256235
    I'm creating a virtual machine with virt-install, and my command is: virt-install --name=gust --vcpus=4 --ram=8192 --network bridge:br0 --cdrom=/opt/rhel-server-6.2-x86_64-dvd.iso --disk path=/opt/as1/as1.img,size=50 --accelerate After running the command, the installer shows "Press the <ENTER> key to begin the installation process." and the boot menu ("Welcome to Red Hat Enterprise Linux 6.2!" with the options: Install or upgrade an existing system, Install system with basic video driver, Rescue installed system, Boot from local drive, Memory test), then: Press [Tab] to edit options Automatic boot in 57 seconds... Loading vmlinuz...... Loading initrd.img...............................ready. Probing EDD (edd=off to disable)... ok and it hangs there on "Probing EDD". I waited a long time and it made no progress, so I pressed Ctrl+] to detach and stopped it. virsh list shows the guest is still running, but I cannot attach to it with virsh console gust. What is the problem, and what should I do? Many thanks.

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many to many mapping of one of my tables in LINQ to SQL. Many to many mappings aren’t transparent in LINQ to SQL and it maps the link table the same way the SQL schema has it when creating one. In other words LINQ to SQL isn’t smart about many to many mappings and just treats it like the 3 underlying tables that make up the many to many relationship. Iain Galloway has a nice blog entry about Many to Many relationships in LINQ to SQL. I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code. When I created a database I’ve been using to experiment around with various different OR/Ms recently I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there’s a good reason why it fails, can you spot it below? (read on :-}) Here is the original database layout: There’s an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model the *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix): The relationship looks perfectly fine, both in the designer as well as in the XML document: <Table Name="dbo.wws_Item_Categories" Member="ItemCategories"> <Type Name="ItemCategory"> <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" /> <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" /> </Type> </Table> <Table Name="dbo.wws_Categories" Member="Categories"> <Type Name="Category"> <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" /> <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" /> <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" /> <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" /> <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" /> <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" /> </Type> </Table> However when looking at the code generated these navigation properties (also on Item) are completely missing: [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")] [global::System.Runtime.Serialization.DataContractAttribute()] public partial class ItemCategory : Westwind.BusinessFramework.EntityBase { private System.Guid _ItemId; private System.Guid _CategoryId; public ItemCategory() { } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)] public System.Guid ItemId { get { return this._ItemId; } set { if ((this._ItemId != value)) { 
this._ItemId = value; } } } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)] public System.Guid CategoryId { get { return this._CategoryId; } set { if ((this._CategoryId != value)) { this._CategoryId = value; } } } } Notice that the Item and Category association properties which should be EntityRef properties are completely missing. They’re there in the model, but the generated code – not so much. So what’s the problem here? The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious ain’t it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a Pk to the database and then importing results in this model layout: which properly generates the Item and Category properties into the link entity. It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in a table in order to recognize a many to many relation. EF actually handles the M->M relation directly without the intermediate link entity unlike LINQ to SQL. [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId as a composite primary key in the database, which shows up in LINQ to SQL like this: This also works in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach as it also guarantees uniqueness of the keys and so helps in database integrity. It took me a while to figure out WTF was going on here – lulled by the designer into thinking that the properties would be there when they were not. It’s actually a well-documented feature of L2S that each entity in the model requires a Pk, but of course that’s easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issues with L2S of course – you have to play by its rules, and once you hit one of those rules there’s no way around them – you’re stuck with what it requires, which in this case meant changing the database. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ADO.NET, LINQ
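
    For reference, once the link table has its primary key and the EntityRef properties are generated, writing a new link record is only a few lines. This is just a sketch: the WebStoreContext name and the client-assigned Guid Id are assumptions about the model shown above.

        // needs: using System; using System.Linq;
        // "WebStoreContext" stands in for the generated DataContext; the new Id column
        // is assumed to be a client-assigned uniqueidentifier.
        using (var context = new WebStoreContext())
        {
            Item item = context.Items.First();               // some existing item
            Category category = context.Categories.First();  // some existing category

            var link = new ItemCategory
            {
                Id = Guid.NewGuid(),      // the primary key LINQ to SQL insists on
                ItemId = item.Id,
                CategoryId = category.Id
            };

            context.ItemCategories.InsertOnSubmit(link);
            context.SubmitChanges();
        }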

    Read the article

  • Creating a Cross-Process EventWaitHandle c#

    - by Navaneeth
    Hello all, I have two Windows applications: one is a Windows service that creates an EventWaitHandle and waits on it; the second is a Windows GUI application that opens it by calling EventWaitHandle.OpenExisting() and tries to Set the event. But I am getting an exception in OpenExisting. The exception is "Access to the path is denied". Windows service code: EventWaitHandle wh = new EventWaitHandle(false, EventResetMode.AutoReset, "MyEventName"); wh.WaitOne(); Windows GUI code: try { EventWaitHandle wh = EventWaitHandle.OpenExisting("MyEventName"); wh.Set(); } catch (Exception ex) { MessageBox.Show(ex.Message); } I tried the same code with two sample console applications and it worked fine. Looking forward to hearing from you.
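
    One direction that is commonly suggested for this exact exception, sketched here under assumptions (it presumes the service and the GUI run under different accounts, and that a Global\ name is acceptable): have the service create the event with an explicit access rule, and open it from the GUI with only the rights it needs.

        // needs: using System.Security.AccessControl; using System.Security.Principal; using System.Threading;

        // Windows service: create the event with an ACL that other accounts can use.
        var authenticatedUsers = new SecurityIdentifier(WellKnownSidType.AuthenticatedUserSid, null);
        var security = new EventWaitHandleSecurity();
        security.AddAccessRule(new EventWaitHandleAccessRule(
            authenticatedUsers,
            EventWaitHandleRights.Synchronize | EventWaitHandleRights.Modify,
            AccessControlType.Allow));

        bool createdNew;
        EventWaitHandle wh = new EventWaitHandle(false, EventResetMode.AutoReset,
            @"Global\MyEventName", out createdNew, security);
        wh.WaitOne();

        // Windows GUI: open the same kernel object with just the rights needed to signal it.
        EventWaitHandle handle = EventWaitHandle.OpenExisting(
            @"Global\MyEventName",
            EventWaitHandleRights.Synchronize | EventWaitHandleRights.Modify);
        handle.Set();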

    Read the article

  • SqlCeException: The column cannot be modified. [ Column name = id ]

    - by pek
    I am simply trying to add a single row in the database but I keep getting an exception. I created a local database and added a single table: users. It consists of two columns: "id" and "name". I only made the id primary key (not auto-increment or anything else). When I run the following code: string execPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().GetName().CodeBase); string dbfile = execPath + @"\LocalDatabase.sdf"; SqlCeConnection conn = new SqlCeConnection("datasource=" + dbfile); conn.Open(); string command = "INSERT INTO users VALUES('1','pek')"; Debug.WriteLine(command); SqlCeCommand comm = conn.CreateCommand(); comm.CommandText = command; comm.ExecuteNonQuery(); I get the following Exception at "comm.ExecuteNonQuery();": SqlCeException was unhandled The column cannot be modified. [ Column name = id ] What's with the "modified" part?
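
    For what it's worth, a parameterized insert with an explicit column list rules out column-ordering and type-conversion surprises. A sketch only; it still assumes the id column really is a plain writable column and not an identity or rowguid column, either of which would also produce this error:

        // needs: using System.Data.SqlServerCe;
        using (SqlCeCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = "INSERT INTO users (id, name) VALUES (@id, @name)";
            cmd.Parameters.AddWithValue("@id", 1);
            cmd.Parameters.AddWithValue("@name", "pek");
            cmd.ExecuteNonQuery();
        }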

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
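
    To make that concrete, registering a page route and reading the value back looks roughly like this; the route name and page file here are placeholders, not from a real project:

        // needs: using System; using System.Web.Routing;

        // Global.asax.cs - register the route once at startup
        protected void Application_Start(object sender, EventArgs e)
        {
            // "users" is the route name later used by GetRouteUrl()/RedirectToRoute()
            RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/UsersPage.aspx");
        }

        // UsersPage.aspx.cs - read the route value back
        protected void Page_Load(object sender, EventArgs e)
        {
            string userId = (string)this.RouteData.Values["userId"];
        }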
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
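
    Implementing such a provider mostly means overriding the four abstract methods of the System.Web.Caching.OutputCacheProvider base class. A bare-bones in-memory sketch (purely illustrative: a dictionary obviously won't help across a web farm, and the expiry values are ignored here):

        // needs: using System; using System.Collections.Concurrent; using System.Web.Caching;
        public class InMemoryOutputCacheProvider : OutputCacheProvider
        {
            private static readonly ConcurrentDictionary<string, object> Cache =
                new ConcurrentDictionary<string, object>();

            public override object Get(string key)
            {
                object entry;
                return Cache.TryGetValue(key, out entry) ? entry : null;
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                // Add returns the existing entry if one is already cached under this key
                return Cache.GetOrAdd(key, entry);
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                Cache[key] = entry;
            }

            public override void Remove(string key)
            {
                object removed;
                Cache.TryRemove(key, out removed);
            }
        }

    The provider still has to be registered under the <caching><outputCache> section in web.config before ASP.NET will use it.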
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data both in memory and out of process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l. Response.RedirectPermanent ASP.NET 4.0 includes features to issue a permanent redirect that issues as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls. Preloading of Applications ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly in ASP.NET 4.0, you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting serviceAutoStartEnabled on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.  The configuration settings need to be made in applicationhost.config: <sites> <site name="WebApplication2" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" /> </site> </sites> <serviceAutoStartProviders> <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" /> </serviceAutoStartProviders> Hooking up a warm-up provider is optional, so you can omit the provider definition and reference. If you do define it, here's what it looks like: public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // initialization for app } } This code fires and while it's running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at optimal performance level without lag.
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0. Summary The core feature set changes are minimal which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • Base64.encodeBase64URLSafeString() could not find method error in eclipse (Android project).

    - by jax
    I have an Android project that is using the Base64.encodeBase64URLSafeString commons method. The part that does the Base64 is in another Java project. I have added the Java project to the Android project through the "Project" tab in the Build Path. I have already linked both projects to commons-codec, thinking that this might be the problem, but am still getting the following error in Eclipse. Both projects have no errors. Could not find method org.apache.commons.codec.binary.Base64.encodeBase64URLSafeString, referenced from method com.mydomain.android.licensegenerator.client.LicenseLoader.doSha1AndBase64Encryption What might I be doing wrong?

    Read the article

  • Parallelism in .NET – Part 14, The Different Forms of Task

    - by Reed
    Before discussing Task creation and actual usage in concurrent environments, I will briefly expand upon my introduction of the Task class and provide a short explanation of the distinct forms of Task.  The Task Parallel Library includes four distinct, though related, variations on the Task class. In my introduction to the Task class, I focused on the most basic version of Task.  This version of Task, the standard Task class, is most often used with an Action delegate.  This allows you to implement each task within the task decomposition as a single delegate. Typically, when using the new threading constructs in .NET 4 and the Task Parallel Library, we use lambda expressions to define anonymous methods.  The advantage of using a lambda expression is that it allows the Action delegate to directly use variables in the calling scope.  This eliminates the need to make separate Task classes for Action<T>, Action<T1,T2>, and all of the other Action<…> delegate types.  As an example, suppose we wanted to make a Task to handle the “Show Splash” task from our earlier decomposition.  Even if this task required parameters, such as a message to display, we could still use an Action delegate specified via a lambda: // Store this as a local variable string messageForSplashScreen = GetSplashScreenMessage(); // Create our task Task showSplashTask = new Task( () => { // We can use variables in our outer scope, // as well as methods scoped to our class! this.DisplaySplashScreen(messageForSplashScreen); }); This provides a huge amount of flexibility.  We can use this single form of task for any task which performs an operation, provided the only information we need to track is whether the task has completed successfully or not.  This leads to my first observation: Use a Task with a System.Action delegate for any task for which no result is generated. This observation leads to an obvious corollary: we also need a way to define a task which generates a result.  The Task Parallel Library provides this via the Task<TResult> class. Task<TResult> subclasses the standard Task class, providing one additional feature – the ability to return a value back to the user of the task.  This is done by switching from providing an Action delegate to providing a Func<TResult> delegate.  If we decompose our problem, and we realize we have one task where its result is required by a future operation, this can be handled via Task<TResult>.  For example, suppose we want to make a task for our “Check for Update” task, we could do: Task<bool> checkForUpdateTask = new Task<bool>( () => { return this.CheckWebsiteForUpdate(); }); Later, we would start this task, and perform some other work.
At any point in the future, we could get the value from the Task<TResult>.Result property, which will cause our thread to block until the task has finished processing: // This uses Task<bool> checkForUpdateTask generated above... // Start the task, typically on a background thread checkForUpdateTask.Start(); // Do some other work on our current thread this.DoSomeWork(); // Discover, from our background task, whether an update is available // This will block until our task completes bool updateAvailable = checkForUpdateTask.Result; This leads me to my second observation: Use a Task<TResult> with a System.Func<TResult> delegate for any task which generates a result. Task and Task<TResult> provide a much cleaner alternative to the previous Asynchronous Programming design patterns in the .NET framework.  Instead of trying to implement IAsyncResult, and providing BeginXXX() and EndXXX() methods, implementing an asynchronous programming API can be as simple as creating a method that returns a Task or Task<TResult>.  The client side of the pattern also is dramatically simplified – the client can call a method, then either choose to call task.Wait() or use task.Result when it needs to wait for the operation’s completion. While this provides a much cleaner model for future APIs, there is quite a bit of infrastructure built around the current Asynchronous Programming design patterns.  In order to provide a model to work with existing APIs, two other forms of Task exist.  There is a constructor for Task which takes an Action<Object> and a state parameter.  In addition, there is a constructor for creating a Task<TResult> which takes a Func<Object, TResult> as well as a state parameter.  When using these constructors, the state parameter is stored in the Task.AsyncState property. While these two overloads exist, and are usable directly, I strongly recommend avoiding this for new development.  The two forms of Task which take an object state parameter exist primarily for interoperability with traditional .NET Asynchronous Programming methodologies.  Using lambda expressions to capture variables from the scope of the creator is a much cleaner approach than using the untyped state parameters, since lambda expressions provide full type safety without introducing new variables.
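
    To see the difference side by side, here is the state-parameter overload next to the lambda approach recommended above; ProcessFile is a made-up method used only to show the two shapes:

        // Interop-style: untyped state parameter, cast needed inside the delegate
        var legacyTask = new Task(state => ProcessFile((string)state), @"C:\temp\input.txt");

        // Preferred: a lambda closing over a strongly typed local
        string path = @"C:\temp\input.txt";
        var modernTask = new Task(() => ProcessFile(path));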

    Read the article

  • Command /usr/bin/codesign failed with exit code 1

    - by sarmenhba
    I was playing with the Keychain certificates; I wanted to remove them all and redo everything so that I could learn how to do it. I have a simple app that worked perfectly before I started playing with the certificates, so the code is fine. When I tried to compile my app to send it to my iPhone device, it gave the error you see in the title. I looked at the log and here is what I see: Build Untitled of project Untitled with configuration Debug CodeSign build/Debug-iphoneos/Untitled.app cd /Users/sarmenhb/Desktop/myapp/Untitled setenv IGNORE_CODESIGN_ALLOCATE_RADAR_7181968 /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/codesign_allocate setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin" /usr/bin/codesign -f -s "iPhone Developer: sarm bo (2ZDTN5FTAL)" --resource-rules=/Users/sarmenhb/Desktop/myapp/Untitled/build/Debug-iphoneos/Untitled.app/ResourceRules.plist --entitlements /Users/sarmenhb/Desktop/myapp/Untitled/build/Untitled.build/Debug-iphoneos/Untitled.build/Untitled.xcent /Users/sarmenhb/Desktop/myapp/Untitled/build/Debug-iphoneos/Untitled.app iPhone Developer: sarm bo (2ZDTN5FTAL): no identity found Command /usr/bin/codesign failed with exit code 1 How do I fix this?

    Read the article

  • WPF Combobox textbox not updating when binding changes.

    - by WillH
    I have a WPF ComboBox as follows: <ComboBox ItemsSource="{Binding Source={StaticResource myList}}" SelectedItem="{Binding Path=mySelectedItem}" /> The problem I have is that when the bound value changes, the selected value in the combobox's textbox does not update. (Note - the values in the combobox list do update.) I am using MVVM, so I can detect in the view model when the binding changes and raise a property changed event, and this updates the combobox, but not the value displayed within the textbox. I think this could be done in the template of the combobox - somehow make the textbox be bound to the SelectedItem of the combobox, or always update when it updates? I don't know how to do this though, so any advice would be most appreciated.
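
    In case it is useful to anyone reading along: one common cause of exactly this symptom is the view model property behind mySelectedItem not raising PropertyChanged with the exact property name. For reference, the property would normally look something like the sketch below (ItemType is a placeholder for whatever the list contains, and the view model is assumed to implement INotifyPropertyChanged):

        // needs: using System.ComponentModel;
        private ItemType _mySelectedItem;
        public ItemType mySelectedItem        // name must match the SelectedItem binding path
        {
            get { return _mySelectedItem; }
            set
            {
                if (!Equals(_mySelectedItem, value))
                {
                    _mySelectedItem = value;
                    var handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("mySelectedItem"));
                }
            }
        }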

    Read the article

  • 1>Project : error PRJ0003 : Error spawning 'rc.exe'.

    - by user320950
    1>Project : error PRJ0003 : Error spawning 'rc.exe'. This is the error I get when I try to build and run this small practice program for reading and writing files, which I can't do because I can't get the files to open correctly. I use Microsoft Visual C++ 2008, and I have also tried opening the file with the full file path, but that doesn't work either. Can someone help? #include <iostream> #include <fstream> using namespace std; int main () { ifstream infile; ofstream myfile; infile.open("ex.txt"); myfile.open ("example.txt"); myfile.close(); return 0; }

    Read the article

  • Distributing IronPython applications - how to detect the location of ipyw.exe

    - by Kragen
    I'm thinking of developing a small application using IronPython, however I want to distribute my app to non-techies and so ideally I want to be able to give them a standard shortcut to my application along with the instructions that they need to install IronPython first. If possible I even want my shortcut to detect if IronPython is not present and display a suitable warning if this is the case (which I can do using a simple VBScript). The trouble is that IronPython doesn't place itself in the %PATH% environment variable, and so if IronPython is installed to a nonstandard location my shortcut doesn't work. Now I could also tell my users "If you install IronPython to a different location you need to go and edit this shortcut and...", but this is all getting far too technical for my target audience. Is there any foolproof way of distributing my IronPython-dependent app?

    Read the article

  • C# How to get the current project directory or the bin directory and move a few levels up?

    - by melaos
    Hi there, I have an ASP.NET MVC app, and I have some XSL files inside the Content directory. I've tried a few methods to get the directory dynamically but keep coming up short. So how do I get the directory to point to the Content/xsl folder? The closest I came was this: this.GetType().Assembly.CodeBase which only returns the project DLL, but I can't figure out how to move up a few levels from there, or which .NET library to use to navigate around the path. There's no ../.. :( Basically I want to navigate to the Content/xsl folder, which is at the same level as the Bin directory. Any idea? Thanks.
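
    In an ASP.NET (MVC) application the usual way to reach a content folder is to map the virtual path rather than walking up from the bin folder; a sketch (the XSL file name is just an example):

        // needs: using System.IO; using System.Web.Hosting;
        // Works anywhere inside the ASP.NET app domain, even outside a request
        string xslFolder = HostingEnvironment.MapPath("~/Content/xsl");
        string stylesheet = Path.Combine(xslFolder, "transform.xsl");

        // Inside a controller or page, Server.MapPath("~/Content/xsl") gives the same result.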

    Read the article

  • Using Simple XML and getting NoClassDefFoundError in Android

    - by Berat Onur Ersen
    I'm trying to use Simple XML to convert my Java objects to XML format in my Android application. I'm getting NoClassDefFoundError at the line Serializer serializer = new Persister(); java.lang.NoClassDefFoundError: org.simpleframework.xml.core.Persister I have simple-xml-2.6.1.jar in the project class path, and when I got the NoClassDefFoundError I also put these 3 jars in the classpath: stax-1.2.0.jar stax-api-1.0.1.jar xpp3-1.1.3_8.jar but it made no difference. I'm still getting the NoClassDefFoundError. Any kind of help will be appreciated. Thank you.

    Read the article

  • Parallelism in .NET – Part 18, Task Continuations with Multiple Tasks

    - by Reed
    In my introduction to Task continuations I demonstrated how the Task class provides a more expressive alternative to traditional callbacks.  Task continuations provide a much cleaner syntax than traditional callbacks, but there are other reasons to switch to using continuations… Task continuations provide a clean syntax, and a very simple, elegant means of synchronizing asynchronous method results with the user interface.  In addition, continuations provide a very simple, elegant means of working with collections of tasks. Prior to .NET 4, working with multiple related asynchronous method calls was very tricky.  If, for example, we wanted to run two asynchronous operations, followed by a single method call which we wanted to run when the first two methods completed, we’d have to program all of the handling ourselves.  We would likely need to take some approach such as using a shared callback which synchronized against a common variable, or using a WaitHandle shared within the callbacks to allow one to wait for the second.  Although this could be accomplished easily enough, it requires manually placing this handling into every algorithm which requires this form of blocking.  This is error prone, difficult, and can easily lead to subtle bugs. Similar to how the Task class's static methods provide a way to block until multiple tasks have completed, TaskFactory contains static methods which allow a continuation to be scheduled upon the completion of multiple tasks: TaskFactory.ContinueWhenAll. This allows you to easily specify a single delegate to run when a collection of tasks has completed.  For example, suppose we have a class which fetches data from the network.  This can be a long running operation, and potentially fail in certain situations, such as a server being down.  As a result, we have three separate servers which we will “query” for our information.  Now, suppose we want to grab data from all three servers, and verify that the results are the same from all three. With traditional asynchronous programming in .NET, this would require using three separate callbacks, and managing the synchronization between the various operations ourselves.  The Task and TaskFactory classes simplify this for us, allowing us to write: var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) ); var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) ); var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) ); var result = Task.Factory.ContinueWhenAll( new[] {server1, server2, server3 }, (tasks) => { // Propagate exceptions (see below) Task.WaitAll(tasks); return this.CompareTaskResults( tasks[0].Result, tasks[1].Result, tasks[2].Result); }); This is clean, simple, and elegant.  The one complication is the Task.WaitAll(tasks); statement.
Although the continuation will not complete until all three tasks (server1, server2, and server3) have completed, there is a potential snag.  If the networkClass.GetResults method fails, and raises an exception, we want to make sure to handle it cleanly.  By using Task.WaitAll, any exceptions raised within any of our original tasks will get wrapped into a single AggregateException by the WaitAll method, providing us a simplified means of handling the exceptions.  If we wait on the continuation, we can trap this AggregateException, and handle it cleanly.  Without this line, it’s possible that an exception could remain uncaught and unhandled by a task, which later might trigger a nasty UnobservedTaskException.  This would happen any time two of our original tasks failed. Just as we can schedule a continuation to occur when an entire collection of tasks has completed, we can just as easily set up a continuation to run when any single task within a collection completes.  If, for example, we didn’t need to compare the results of all three network locations, but only use one, we could still schedule three tasks.  We could then have our completion logic work on the first task which completed, and ignore the others.  This is done via TaskFactory.ContinueWhenAny: var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) ); var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) ); var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) ); var result = Task.Factory.ContinueWhenAny( new[] {server1, server2, server3 }, (firstTask) => { return this.ProcessTaskResult(firstTask.Result); }); Here, instead of working with all three tasks, we’re just using the first task which finishes.  This is very useful, as it allows us to easily work with results of multiple operations, and “throw away” the others.  However, you must take care when using ContinueWhenAny to properly handle exceptions.  At some point, you should always wait on each task (or use the Task.Result property) in order to propagate any exceptions raised from within the task.  Failing to do so can lead to an UnobservedTaskException.
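
    For completeness, observing the ContinueWhenAll continuation from the first example and unwrapping the AggregateException could look like the following sketch (result and CompareTaskResults come from that example):

        try
        {
            // Blocks until the ContinueWhenAll continuation (and so all three tasks) has finished
            var comparison = result.Result;
        }
        catch (AggregateException ae)
        {
            foreach (var ex in ae.Flatten().InnerExceptions)
            {
                Console.WriteLine("Server query failed: {0}", ex.Message);
            }
        }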

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need  to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before: public class ShiftRegister74HC595 { protected SPI Spi; public ShiftRegister74HC595(Cpu.Pin latchPin) : this(latchPin, SPI.SPI_module.SPI1) { } public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) { var spiConfig = new SPI.Configuration( SPI_mod: spiModule, ChipSelect_Port: latchPin, ChipSelect_ActiveState: false, ChipSelect_SetupTime: 0, ChipSelect_HoldTime: 0, Clock_IdleState: false, Clock_Edge: true, Clock_RateKHz: 1000 ); Spi = new SPI(spiConfig); } public void Write(byte buffer) { Spi.Write(new[] {buffer}); } } All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image: foreach (var bitmap in matrix.MatrixBitmap) { matrix.OnRow(row, bitmap, true); matrix.OnRow(row, bitmap, false); row++; } Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on. public LedMatrix Initialize() { DisplayThread = new Thread(() => DoDisplay(this)); DisplayThread.Start(); return this; } I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. 
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API and enabling the matrix to be instantiated and initialized in a single line: using (var matrix = new LedMS88SR74HC595().Initialize()) The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it: public void Dispose() { Clear(); DisplayThread.Abort(); } Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model: matrix.Set(someimage); while (button.Read()) { Thread.Sleep(10); } Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button: var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled); using (var matrix = new LedMS88SR74HC595().Initialize()) { // Oh, prototype is so sad! var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 }; DisplayAndWait(sad, matrix, button); // Let's make it smile! var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 }; DisplayAndWait(smile, matrix, button); } And here is a video of the prototype running: The prototype in action I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my reader guess where we’re going with all that? 
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip
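
    As a footnote to the bit-order discussion above, here is what the software fallback might have looked like, as a sketch only; the hardware rewiring remains the better option since it costs nothing at runtime:

        // Reverse the bit order of a byte before sending it over SPI,
        // in case rewiring the matrix columns weren't an option.
        static byte ReverseBits(byte value)
        {
            byte result = 0;
            for (int i = 0; i < 8; i++)
            {
                result <<= 1;                  // make room for the next bit
                result |= (byte)(value & 1);   // copy the lowest remaining bit of the input
                value >>= 1;                   // move on to the next input bit
            }
            return result;
        }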

    Read the article

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. Currently it doesn't support userspace tracing so, in the meantime, Systemtap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post Probing PHP with Systemtap on Linux. My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3. Not all Linux kernels are built with Systemtap, since this can impact stability. Check whether your running kernel (or others installed) have Systemtap enabled, and reboot with such a kernel: # grep CONFIG_UTRACE /boot/config-`uname -r` # grep CONFIG_UTRACE /boot/config-* When you install Systemtap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file: # yum install systemtap-sdt-devel You can now install and build PHP as shown in David's article. Basically the build is with: $ cd ~/php-src $ ./configure --disable-all --enable-dtrace $ make (For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704) David's article shows how to trace the probes already implemented in PHP. You can also use Systemtap to trace things like userspace PHP function calls. For example, create test.php: <?php $c = oci_connect('hr', 'welcome', 'localhost/orcl'); $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual"); $r = oci_execute($s); $row = oci_fetch_array($s, OCI_NUM); $x = $row[0]->load(); $row[0]->free(); echo $x; ?> The normal output of this file is the XML form of Oracle's DUAL table: $ ./sapi/cli/php ~/test.php <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> To trace the PHP function calls, create the tracing file functrace.stp: probe process("sapi/cli/php").function("zif_*") { printf("Started function %s\n", probefunc()); } probe process("sapi/cli/php").function("zif_*").return { printf("Ended function %s\n", probefunc()); } This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Login as root, and run System tap on the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp Started function zif_oci_connect Ended function zif_oci_connect Started function zif_oci_parse Ended function zif_oci_parse Started function zif_oci_execute Ended function zif_oci_execute Started function zif_oci_fetch_array Ended function zif_oci_fetch_array Started function zif_oci_lob_load <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Ended function zif_oci_lob_load Started function zif_oci_free_descriptor Ended function zif_oci_free_descriptor Each call and return is logged. The Systemtap scripting language allows complex scripts to be built. There are many examples on the web. To augment this generic capability and the PHP probes in PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8: I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one will accept a parameter that runtime tracing can later display: provider php { probe oci8__connect(char *username); probe oci8__nls_start(); probe oci8__nls_done(); }; I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro. The patch is at the end of config.m4. 
The macro takes the provider prototype file, a name of the header file that 'dtrace' will generate, and a list of sources files with probes. When --enable-dtrace is used during PHP configuration, then the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8 specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code: diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4 index 34ae76c..f3e583d 100644 --- a/ext/oci8/config.m4 +++ b/ext/oci8/config.m4 @@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then PHP_SUBST_OLD(OCI8_ORACLE_VERSION) fi + + if test "$PHP_DTRACE" = "yes"; then + AC_CHECK_HEADERS([sys/sdt.h], [ + PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d], + [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c]) + AC_DEFINE(HAVE_OCI8_DTRACE,1, + [Whether to enable DTrace support for OCI8 ]) + ], [ + AC_MSG_ERROR( + [Cannot find sys/sdt.h which is required for DTrace support]) + ]) + fi + fi In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places: diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c index e2241cf..ffa0168 100644 --- a/ext/oci8/oci8.c +++ b/ext/oci8/oci8.c @@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char } } +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_CONNECT_ENABLED()) { + DTRACE_OCI8_CONNECT(username); + } +#endif + /* Initialize global handles if they weren't initialized before */ if (OCI_G(env) == NULL) { php_oci_init_global_handles(TSRMLS_C); @@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char size_t rsize = 0; sword result; +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_START_ENABLED()) { + DTRACE_OCI8_NLS_START(); + } +#endif PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize)); if (result != OCI_SUCCESS) { charsetid_nls_lang = 0; } smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0); + +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_DONE_ENABLED()) { + DTRACE_OCI8_NLS_DONE(); + } +#endif } timestamp = time(NULL); The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes. I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h: diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h index b0d6516..c81fc5a 100644 --- a/ext/oci8/php_oci8_int.h +++ b/ext/oci8/php_oci8_int.h @@ -44,6 +44,10 @@ # endif # endif /* osf alpha */ +#ifdef HAVE_OCI8_DTRACE +#include "oci8_dtrace_gen.h" +#endif + #if defined(min) #undef min #endif Now PHP can be rebuilt: $ cd ~/php-src $ rm configure && ./buildconf --force $ ./configure --disable-all --enable-dtrace \ --with-oci8=instantclient,/home/cjones/instantclient $ make If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned. 
The new probes can be seen by logging in as root and running: # stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i' process("sapi/cli/php").provider("php").mark("oci8__connect") process("sapi/cli/php").provider("php").mark("oci8__nls_done") process("sapi/cli/php").provider("php").mark("oci8__nls_start") To test them out, create a new trace file, oci.stp: global numconnects; global start; global numcharlookups = 0; global tottime = 0; probe process.provider("php").mark("oci8-connect") { printf("Connected as %s\n", user_string($arg1)); numconnects += 1; } probe process.provider("php").mark("oci8-nls_start") { start = gettimeofday_us(); numcharlookups++; } probe process.provider("php").mark("oci8-nls_done") { tottime += gettimeofday_us() - start; } probe end { printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups); printf("Total NLS charset initialization time: %ld usecs/connect\n", (numcharlookups > 0 ? tottime/numcharlookups : 0)); } This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Log in as root and run Systemtap over the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp Connected as cj <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Connects: 1, Charset lookups: 1 Total NLS charset initialization time: 164 usecs/connect This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.

    Read the article

  • Unable to attach "AdventureWorks2008" Sample Database to a named Instance in SQL Server 2008

    - by uzorick
    First of all "Northwind" and "AdventureWorksDW2008" databases attached without problem, but "AdventureWorks2008" fails with the following error. // Msg 5120, Level 16, State 105, Line 1 Unable to open the physical file "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Documents". Operating system error 2: "2(The system cannot find the file specified.)". Msg 5105, Level 16, State 14, Line 1 A file activation error occurred. The physical file name 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Documents' may be incorrect. Diagnose and correct additional errors, and retry the operation. Msg 1813, Level 16, State 2, Line 1 Could not open new database 'AdventureWorks2008'. CREATE DATABASE is aborted. // PS: I did not use the default database instance "MSSQLSERVER" during install, so Where is it finding this path "C:...\MSSQL10.MSSQLSERVER...\Documents"?

    Read the article

  • WPF DataGrid DataGridHyperlinkColumn bound to Uri

    - by MicMit
    No problem when binding to a property of string type ( "http://something.com" ). However, I seem to have seen direct binding to a Uri property in old examples: <dg:DataGridHyperlinkColumn IsReadOnly="True" Header="Uri" Binding="{Binding Path=NavigURI}" /> NavigURI is a Uri. More recent docs seem to require a converter: <DataGridHyperlinkColumn Header="Email" Binding="{Binding Email}" ContentBinding="{Binding Email, Converter={StaticResource EmailConverter}}" /> I tried with a converter also, but in both cases, with or without a converter, the column is empty. Debugging showed that the value passed to the "Convert" method is always null. My question: if for any reason I want to bind to a Uri property, is it feasible with the latest DataGrid from CodePlex?
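    For reference, the converter such docs have in mind typically does little more than turn the Uri into a display string. A rough sketch is below; the class name UriToStringConverter and the exact mapping are illustrative assumptions, not taken from the question, and a converter alone may not explain why Convert receives null:

        using System;
        using System.Globalization;
        using System.Windows.Data;

        // Hypothetical converter turning a bound Uri into the string shown by the hyperlink column.
        public class UriToStringConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                // The bound property is expected to be a System.Uri; fall back to an empty string otherwise.
                var uri = value as Uri;
                return uri == null ? string.Empty : uri.ToString();
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                // Not needed for a read-only column.
                throw new NotSupportedException();
            }
        }

    It would be registered as a StaticResource and referenced from ContentBinding, as in the Email example above.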

    Read the article

  • Fancy dynamic list in Android: TableLayout vs ListView

    - by Ralkie
    There is a requirement to have a not-so-trivial dynamic list, each record of which consists of several columns (texts, buttons). It should look something like: Text11 Text12 Button1 Button2 Text21 Text22 Button1 Button2 ... At first the obvious way to accomplish that seemed to be TableLayout. I was expecting to have layout/styling data specified in res/layout/*.xml and to populate it with some dataset from Java code (as with ListView, for which it's possible to specify the TextView of an item in *.xml and bind it to some array using ArrayAdapter). But after playing for a while, all I found to be possible is fully populating the TableLayout programmatically. Still, creating it TableRow by TableRow and setting layout attributes directly in Java code doesn't seem elegant enough. So the questions are: Am I on the right path? Is TableLayout really the best View to accomplish that? Maybe it's more appropriate to extend ListView or something else to meet such requirements? Is it possible to have a TableLayout/TableRow template specified in *.xml and data bound to this template on the Java side?

    Read the article

  • SQL Server 2008 to SQL Server 2008

    - by Sakhawat Ali
    I have an MDF and LDF file from SQL Server 2005. I attached it to SQL Server 2008 and made some changes to the data. Now when I attach it back to SQL Server 2005 Express Edition it gives a version error. The database 'E:\DB\JOBPERS.MDF' cannot be opened because it is version 655. This server supports version 612 and earlier. A downgrade path is not supported. Could not open new database 'E:\DB\JOBPERS.MDF'. CREATE DATABASE is aborted. An attempt to attach an auto-named database for file E:\DB\Jobpers.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.

    Read the article

  • Ndk-build: CreateProcess: make (e=87): The parameter is incorrect

    - by user1514958
    I get an error when building a static lib with the NDK on the Windows platform: process_begin: CreateProcess( "PATH"\android-ndk-r8b\toolchains\arm-linux-androideabi-4.6\prebuilt\windows\bin\arm-linux-androideabi-ar.exe, "some other commands" ) failed. make (e=87): The parameter is incorrect. make: *** [obj/local/armeabi-v7a/staticlib.a] Error 87 make: *** Waiting for unfinished jobs.... All source files build successfully, and this error occurs when composing the object files. I don't get this error when building this project in Ubuntu; it occurs only on Windows. I suppose I found the issue: the second parameter of the CreateProcess Win API function, lpCommandLine, has a maximum length of 32,768 characters, but in my case it is more than 32,768 characters. How can I solve this issue?

    Read the article

  • Android "application stopped unexpectedly" - Google Hello MapView Tutorial

    - by Cookie
    Hi, I'm trying the Hello MapView Tutorial at the moment. Whe I launch the program in the emulator, I get a huge number of errors (none of the exceptions seems to be related with lines in my code). The emulator window tells the program "stopped unexpectedly". Can anybody tell me which is the key line in the error output? What do I have to change? 05-02 15:04:57.195: ERROR/vold(26): Error opening switch name path '/sys/class/switch/test2' (No such file or directory) 05-02 15:04:57.195: ERROR/vold(26): Error bootstrapping switch '/sys/class/switch/test2' (No such file or directory) 05-02 15:04:57.195: ERROR/vold(26): Error opening switch name path '/sys/class/switch/test' (No such file or directory) 05-02 15:04:57.195: ERROR/vold(26): Error bootstrapping switch '/sys/class/switch/test' (No such file or directory) 05-02 15:05:10.659: ERROR/MemoryHeapBase(51): error opening /dev/pmem: No such file or directory 05-02 15:05:10.659: ERROR/SurfaceFlinger(51): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake 05-02 15:05:10.699: ERROR/libEGL(51): couldn't load <libhgl.so> library (Cannot load library: load_library[984]: Library 'libhgl.so' not found) 05-02 15:05:11.403: ERROR/libEGL(62): couldn't load <libhgl.so> library (Cannot load library: load_library[984]: Library 'libhgl.so' not found) 05-02 15:05:14.775: ERROR/BatteryService(51): Could not open '/sys/class/power_supply/usb/online' 05-02 15:05:14.775: ERROR/BatteryService(51): Could not open '/sys/class/power_supply/battery/batt_vol' 05-02 15:05:14.775: ERROR/BatteryService(51): Could not open '/sys/class/power_supply/battery/batt_temp' 05-02 15:05:15.148: ERROR/EventHub(51): could not get driver version for /dev/input/mouse0, Not a typewriter 05-02 15:05:15.148: ERROR/EventHub(51): could not get driver version for /dev/input/mice, Not a typewriter 05-02 15:05:15.282: ERROR/System(51): Failure starting core service 05-02 15:05:15.282: ERROR/System(51): java.lang.SecurityException 05-02 15:05:15.282: ERROR/System(51): at android.os.BinderProxy.transact(Native Method) 05-02 15:05:15.282: ERROR/System(51): at android.os.ServiceManagerProxy.addService(ServiceManagerNative.java:146) 05-02 15:05:15.282: ERROR/System(51): at android.os.ServiceManager.addService(ServiceManager.java:72) 05-02 15:05:15.282: ERROR/System(51): at com.android.server.ServerThread.run(SystemServer.java:162) 05-02 15:05:15.302: ERROR/AndroidRuntime(51): Crash logging skipped, no checkin service 05-02 15:05:17.012: ERROR/LockPatternKeyguardView(51): Failed to bind to GLS while checking for account 05-02 15:05:21.795: ERROR/ActivityThread(100): Failed to find provider info for com.google.settings 05-02 15:05:21.819: ERROR/ActivityThread(100): Failed to find provider info for com.google.settings 05-02 15:05:25.872: ERROR/ApplicationContext(51): Couldn't create directory for SharedPreferences file shared_prefs/wallpaper-hints.xml 05-02 15:05:28.923: ERROR/vold(26): Cannot start volume '/sdcard' (volume is not bound) 05-02 15:05:26.879: ERROR/ActivityThread(97): Failed to find provider info for android.server.checkin 05-02 15:05:30.211: ERROR/ActivityThread(97): Failed to find provider info for android.server.checkin 05-02 15:05:30.430: ERROR/ActivityThread(97): Failed to find provider info for android.server.checkin 05-02 15:05:32.463: ERROR/MediaPlayerService(30): Couldn't open fd for content://settings/system/notification_sound 05-02 15:05:32.489: ERROR/MediaPlayer(51): Unable to to create media player 05-02 15:05:34.783: ERROR/ActivityThread(51): 
Failed to find provider info for com.google.settings 05-02 15:05:34.783: ERROR/ActivityThread(51): Failed to find provider info for com.google.settings 05-02 15:05:35.359: ERROR/AndroidRuntime(201): Uncaught handler: thread main exiting due to uncaught exception 05-02 15:05:35.395: ERROR/AndroidRuntime(201): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{org.diretto.client.smartphone.android/org.diretto.client.smartphone.android.ShowMap}: java.lang.ClassNotFoundException: org.diretto.client.smartphone.android.ShowMap in loader dalvik.system.PathClassLoader@4376af90 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2324) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at android.app.ActivityThread.access$2100(ActivityThread.java:116) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at android.os.Handler.dispatchMessage(Handler.java:99) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at android.os.Looper.loop(Looper.java:123) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at android.app.ActivityThread.main(ActivityThread.java:4203) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at java.lang.reflect.Method.invokeNative(Native Method) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at java.lang.reflect.Method.invoke(Method.java:521) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at dalvik.system.NativeStart.main(Native Method) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): Caused by: java.lang.ClassNotFoundException: org.diretto.client.smartphone.android.ShowMap in loader dalvik.system.PathClassLoader@4376af90 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:243) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at java.lang.ClassLoader.loadClass(ClassLoader.java:573) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at java.lang.ClassLoader.loadClass(ClassLoader.java:532) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at android.app.Instrumentation.newActivity(Instrumentation.java:1097) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2316) 05-02 15:05:35.395: ERROR/AndroidRuntime(201): ... 11 more 05-02 15:05:35.527: ERROR/dalvikvm(201): Unable to open stack trace file '/data/anr/traces.txt': Permission denied

    Read the article

  • MSBuild cannot find SGen when compiling a solution

    - by Jaxidian
    I've looked at several other SGen-related questions on here and either their answers don't apply or their answers don't fix this for me. I have installed several SDKs to fix this issue with no luck. Reference types should not be changed since this is the only place this is a problem. Once suggestion is to put SGen.exe into the C:\Windows\Microsoft.NET\Framework\v3.5 folder, but that's not been done on the box where this is not a problem. In this scenario, SGen.exe actually exists and is right where it's supposed to be, but MSBuild still is having issues with finding it for some reason! Background: We have a NAnt script that automates our builds. In this scenario, NAnt is calling MSBuild and MSBuild is generating the error claiming to be unable to find SGen. The project is .NET 3.5-based. I have my primary dev environment (64-bit Vista Ultimate) where the script works perfectly and I am attempting to duplicate it in a VM (64-bit Win 7 Ultimate). I THINK I have everything to the point where I should be good-to-go but this fails on the Win7 box (works perfectly on the Vista box). I have done some comparisons between the two boxes and they both look identical in this regard, but it still fails. For example, the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework's sdkInstallRootv2.0 value is set to C:\Program Files\Microsoft.NET\SDK\v2.0 64bit\ on both machines. In both machines, SGen.exe is in that path's bin subdirectory. NAnt Script: <target name="report-installer" depends="fail-if-environment-not-set"> <exec program="MSBuild.exe" basedir="${framework35.directory}"> <arg value="${tools.directory.current}\ReportInstaller\ReportInstaller.sln" /> <arg value="/p:Configuration=${buildconfiguration.current}" /> </exec> </target> The error message I get is this: report-installer: [exec] Microsoft (R) Build Engine Version 3.5.30729.4926 [exec] [Microsoft .NET Framework, Version 2.0.50727.4927] [exec] Copyright (C) Microsoft Corporation 2007. All rights reserved. [exec] [exec] Build started 4/8/2010 11:28:23 AM. [exec] Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" on node 0 (default targets). [exec] Building solution configuration "Release|Any CPU". [exec] Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" (1) is building "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.csproj" (2) on node 0 (default targets). [exec] Could not locate the .NET Framework SDK. The task is looking for the path to the .NET Framework SDK at the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK. 2.) Manually set the above registry key to the correct location. [exec] CoreCompile: [exec] Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. [exec] C:\Windows\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets(1902,9): error MSB3091: Task failed because "sgen.exe" was not found, or the .NET Framework SDK v2.0 is not installed. The task is looking for "sgen.exe" in the "bin" subdirectory beneath the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK v2.0. 2.) Manually set the above registry key to the correct location. 3.) 
Pass the correct location into the "ToolPath" parameter of the task. [exec] Done Building Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.csproj" (default targets) -- FAILED. [exec] Done Building Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" (default targets) -- FAILED. [exec] [exec] Build FAILED. [exec] [exec] "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" (default target) (1) -> [exec] "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.csproj" (default target) (2) -> [exec] (GenerateSerializationAssemblies target) -> [exec] C:\Windows\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets(1902,9): error MSB3091: Task failed because "sgen.exe" was not found, or the .NET Framework SDK v2.0 is not installed. The task is looking for "sgen.exe" in the "bin" subdirectory beneath the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK v2.0. 2.) Manually set the above registry key to the correct location. 3.) Pass the correct location into the "ToolPath" parameter of the task. [exec] [exec] 0 Warning(s) [exec] 1 Error(s) [exec] [exec] Time Elapsed 00:00:00.24 [call] C:\Projects\Production\Source\reports.build(15,4): [call] External Program Failed: C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe (return code was 1) What am I doing wrong here that is causing MSBuild to STILL be unable to find SGen?

    Read the article

  • Validation in Silverlight

    - by Timmy Kokke
    Getting started with the basics Validation in Silverlight can get very complex pretty quickly. The DataGrid control is the only control that does data validation automatically, but often you want to validate your own entry form. Values a user may enter in this form can be restricted by the customer and have to match a list of requirements exactly, or you just want to prevent problems when saving the data to the database. Showing a message to the user when a value is entered is pretty straightforward, as I'll show you in the following example. This (default) Silverlight textbox is data-bound to a simple data class. It has to be bound in "Two-way" mode to be sure the source value is updated when the target value changes. The INotifyPropertyChanged interface must be implemented by the data class to get the notification system to work. When the property changes, a simple check is performed, and when the value doesn't match some criteria a ValidationException is thrown. The ValidatesOnExceptions binding attribute is set to True to tell the textbox it should handle the thrown ValidationException. Let's have a look at some code now. The xaml should contain something like below. The most important part is inside the binding. In this case the Text property is bound to the "Name" property in TwoWay mode. It is also told to validate on exceptions. This property is false by default.   <StackPanel Orientation="Horizontal"> <TextBox Width="150" x:Name="Name" Text="{Binding Path=Name, Mode=TwoWay, ValidatesOnExceptions=True}"/> <TextBlock Text="Name"/> </StackPanel>   The data class in this first example is a very simplified person class with only one property: string Name. The INotifyPropertyChanged interface is implemented and the PropertyChanged event is fired when the Name property changes. When the property changes, a check is performed to see if the new string is null or empty. If this is the case, a ValidationException is thrown explaining that the entered value is invalid.   public class PersonData:INotifyPropertyChanged { private string _name; public string Name { get { return _name; } set { if (_name != value) { if(string.IsNullOrEmpty(value)) throw new ValidationException("Name is required"); _name = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("Name")); } } } public event PropertyChangedEventHandler PropertyChanged=delegate { }; } The last thing that has to be done is binding an instance of the PersonData class to the DataContext of the control. This is done in the code-behind file. public partial class Demo1 : UserControl { public Demo1() { InitializeComponent(); this.DataContext = new PersonData() {Name = "Johnny Walker"}; } }   Error Summary In many cases you would have more than one entry control. A summary of errors would be nice in such a case. With a few changes to the xaml an error summary, like below, can be added. First, add a namespace to the xaml so the control can be used. Add the following line to the header of the .xaml file. xmlns:Controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data.Input"   Next, add the control to the layout. To get the result as in the image shown earlier, add the control right above the StackPanel from the first example. It's got a small margin to separate it from the textbox a little.   <Controls:ValidationSummary Margin="8"/>   The ValidationSummary control has to be notified that a ValidationException occurred.
    This can be done with a small change to the xaml too. Add the NotifyOnValidationError property to the binding expression. By default this value is set to false, so nothing would be notified. Set the property to true to get it to work.   <TextBox Width="150" x:Name="Name" Text="{Binding Name, Mode=TwoWay, ValidatesOnExceptions=True, NotifyOnValidationError=True}"/>   Data annotation Validating data in the setter is one option, but not my personal favorite. It's the easiest way if you have a single required value you want to check, but often you want to validate more. Besides, I don't consider it best practice to write logic in setters. The way used by frameworks like WCF RIA Services is the use of attributes on the properties. Instead of throwing exceptions you have to call the static method ValidateProperty on the Validator class. This call always stays the same for a particular property, even when you change the attributes on the property. To mark a property "Required" you can use the RequiredAttribute. This is what the Name property is going to look like:   [Required] public string Name { get { return _name; } set { if (_name != value) { Validator.ValidateProperty(value, new ValidationContext(this, null, null){ MemberName = "Name" }); _name = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("Name")); } } }   The ValidateProperty method takes the new value for the property and an instance of ValidationContext. The properties passed to the constructor of the ValidationContext class are very straightforward. This part is the same every time. The only thing that changes is the MemberName property of the ValidationContext. It has to hold the name of the property you want to validate. It's the same value you provide the PropertyChangedEventArgs with. The System.ComponentModel.DataAnnotations namespace contains eight different validation attributes, including a base class to create your own. They are: RequiredAttribute Specifies that a value must be provided. RangeAttribute The provided value must fall in the specified range. RegularExpressionAttribute Validates whether the value matches the regular expression. StringLengthAttribute Checks if the number of characters in a string falls between a minimum and maximum amount. CustomValidationAttribute Use a custom method to validate the value. DataTypeAttribute Specify a data type using an enum or a custom data type. EnumDataTypeAttribute Makes sure the value is found in an enum. ValidationAttribute A base class for custom validation attributes. All of these will ensure that a validation exception is thrown, except the DataTypeAttribute. This attribute is used to provide some additional information about the property. You can use this information in your own code. It's no problem to stack different validation attributes together. For example, when an Age is required and must fall in the range from 0 to 125:   [Required] [Range(0,125,ErrorMessage = "Value is not a valid age")] public int Age {   Or in one row like this, for a required Name with at least 3 characters and a maximum of 255:   [Required, StringLength(255,MinimumLength = 3)] public string Name {   Delayed validation Having properties marked as required can be very useful. The only downside to the technique described earlier is that you have to change the value in order to get it validated. What if you start out with an empty entry form? All fields are empty and thus won't be validated.
    With this small trick you can validate at the moment the user clicks the submit button.   <TextBox Width="150" x:Name="NameField" Text="{Binding Name, Mode=TwoWay, ValidatesOnExceptions=True, NotifyOnValidationError=True, UpdateSourceTrigger=Explicit}"/>   By default, when a TwoWay bound control loses focus the value is updated. When you have added validation like I've shown you earlier, the value is validated as well. To overcome this, you have to tell the binding to update explicitly by setting the UpdateSourceTrigger binding property to Explicit:   private void SubmitButtonClick(object sender, RoutedEventArgs e) { NameField.GetBindingExpression(TextBox.TextProperty).UpdateSource(); }   This way, the binding is still in two directions, but the source is only updated, and thus validated, when you tell it to. In the code behind you have to call the UpdateSource method on the binding expression, which you can get from the TextBox.   Conclusion Data validation is something you'll probably want on almost every entry form. I always thought it was hard to do, but it wasn't. If you can throw an exception you can do validation. If you want to know anything more in depth about something I talked about in this article, let me know. I might write an entire post about that.
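    The attribute list above mentions CustomValidationAttribute and the ValidationAttribute base class without showing one. A minimal sketch of a custom attribute might look like the following; the name NotJustWhitespaceAttribute and the rule it checks are made up for illustration:

        using System.ComponentModel.DataAnnotations;

        // Hypothetical custom attribute: rejects strings that consist only of whitespace.
        public class NotJustWhitespaceAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                // Null is left to [Required]; only reject strings that are all whitespace.
                var text = value as string;
                return text == null || text.Trim().Length > 0;
            }
        }

    It could then be stacked on a property next to the built-in attributes, for example [Required, NotJustWhitespace(ErrorMessage = "Enter a real value")], and Validator.ValidateProperty would evaluate it like any other attribute.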

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
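    For completeness, the LastResult check for Add() that was skipped above as "trivial enough" would presumably follow the same Row-based idiom as TestSubtractGivesExpectedLastResult. A sketch, assuming the same fixture and container setup (the test name here is made up):

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 2)]
        [Row(0, 0, 0)]
        public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            // The return value is ignored on purpose; only the stored LastResult is asserted.
            calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, calculator.LastResult);
        }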

    Read the article
