Search Results

Search found 46973 results on 1879 pages for 'return path'.


  • C# async and actors

    - by Alex.Davies
    If you read my last post about async, you might be wondering what drove me to write such odd code in the first place. The short answer is that .NET Demon is written using NAct Actors. Actors are an old idea, which I believe deserves a renaissance under C# 5. The idea is to isolate each stateful object so that only one thread has access to its state at any point in time. That much should be familiar: it's equivalent to traditional lock-based synchronization. The different part is that actors pass "messages" to each other rather than calling a method and waiting for it to return. By doing that, each thread can only ever be holding one lock. This completely eliminates deadlocks, my least favourite concurrency problem. Most people who use actors take this quite literally, and there are plenty of frameworks which help you to create message classes and loops which can receive the messages, inspect what type of message they are, and process them accordingly. But I write C# for a reason. Do I really have to choose between using actors and everything I love about object orientation in C#? Type safety, interfaces, inheritance, generics? As it turns out, no. You don't need to choose between messages and method calls. A method call makes a perfectly good message, as long as you don't wait for it to return. This is where asynchronous methods come in. I have used NAct for a while to wrap my objects in a proxy layer. As long as I followed the rule that methods must always return void, NAct queued up the call for later, and immediately released my thread. When I needed to get information out of other actors, I could use EventHandlers and callbacks (continuation passing style, for any CS geeks reading), and NAct would call me back in my isolated thread without blocking the actor that raised the event. Using callbacks looks horrible though. To remind you:

    m_BuildControl.FilterEnabledForBuilding(projects,
        enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(enabledProjects,
            newDirtyProjects =>
            {
                .......

    Which is why I'm really happy that NAct now supports async methods. Now, methods are allowed to return Task rather than just void. I can await those methods, and C# 5 will turn the rest of my method into a continuation for me. NAct will run the other method in the other actor's context, but will make sure that when my method resumes, we're back in my context. Neither actor was ever blocked waiting for the other one. Apart from when they were actually busy doing something, they were responsive to concurrent messages from other sources. To be fair, you could use async methods with lock statements to achieve exactly the same thing, but it's ugly. Here's a realistic example of an object that has a queue of data that gets passed to another object to be processed:

    class QueueProcessor
    {
        private readonly ItemProcessor m_ItemProcessor = ...
        private readonly object m_Sync = new object();
        private Queue<object> m_DataQueue = ...
        private List<object> m_Results = ...

        public async Task ProcessOne()
        {
            object data = null;
            lock (m_Sync)
            {
                data = m_DataQueue.Dequeue();
            }
            var processedData = await m_ItemProcessor.ProcessData(data);
            lock (m_Sync)
            {
                m_Results.Add(processedData);
            }
        }
    }

    We needed to write two lock blocks, one to get the data to process, one to store the result. The worrying part is how easily we could have forgotten one of the locks.
    Compare that to the version using NAct:

    class QueueProcessorActor : IActor
    {
        private readonly ItemProcessor m_ItemProcessor = ...
        private Queue<object> m_DataQueue = ...
        private List<object> m_Results = ...

        public async Task ProcessOne()
        {
            // We are an actor, it's always thread-safe to access our private fields
            var data = m_DataQueue.Dequeue();
            var processedData = await m_ItemProcessor.ProcessData(data);
            m_Results.Add(processedData);
        }
    }

    You don't have to explicitly lock anywhere; NAct ensures that your code will only ever run on one thread, because it's an actor. Either way, async is definitely better than traditional synchronous code. Here's a diagram of what a typical synchronous implementation might do: the left side shows what is running on the thread that has the lock required to access the QueueProcessor's data. The red section is where that lock is held, but doesn't need to be. Contrast that with the async version we wrote above: here, the lock is released in the middle. The QueueProcessor is free to do something else. Most importantly, even if the ItemProcessor sometimes calls the QueueProcessor, they can never deadlock waiting for each other. So I thoroughly recommend you use async for all code that has to wait a while for things. And if you find yourself writing lots of lock statements, think about using actors as well. Using actors and async together really takes the misery out of concurrent programming.
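
    A hedged sketch of the calling side (the caller method and loop are my own assumptions, not from the post): awaiting an actor's async method queues it as a message on that actor's context, and neither actor blocks while it runs.

    // Assumed caller: another actor (or any async method) driving the QueueProcessorActor above.
    public async Task ProcessBatch(QueueProcessorActor queueProcessor, int count)
    {
        for (int i = 0; i < count; i++)
        {
            // Runs in the processor's context; we resume in our own context afterwards.
            await queueProcessor.ProcessOne();
        }
    }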

    Read the article

  • BizTalk 2009 - Pipeline Component Wizard Error

    - by Stuart Brierley
    When attempting to run the BizTalk Server Pipeline Component Wizard for the first time I encountered an error that prevented the creation of the pipeline component project: System.ArgumentException: Value does not fall within the expected range. I found the solution for this error in a couple of places, first a reference to the issue on the Codeplex Project and then a fuller description on a blog referring to the BizTalk 2006 implementation. To resolve this issue you need to make a change to the downloaded wizard solution code, by changing the file BizTalkPipelineComponentWizard.cs around line 304.

    From:

    // get a handle to the project. using mySolution will not work.
    pipelineComponentProject = this._Application.Solution.Projects.Item(Path.Combine(Path.GetFileNameWithoutExtension(projectFileName), projectFileName));

    To:

    // get a handle to the project. using mySolution will not work.
    if (this._Application.Solution.Projects.Count == 1)
        pipelineComponentProject = this._Application.Solution.Projects.Item(1);
    else
    {
        // In Doubt: Which project is the target?
        pipelineComponentProject = this._Application.Solution.Projects.Item(projectFileName);
    }

    Following this you need to uninstall the wizard, rebuild the solution and re-install the wizard again.

    Read the article

  • Subterranean IL: Exception handler semantics

    - by Simon Cooper
    In my blog posts on fault and filter exception handlers, I said that the same behaviour could be replicated using normal catch blocks. Well, that isn't entirely true... Changing the handler semantics Consider the following: .try { .try { .try { newobj instance void [mscorlib]System.Exception::.ctor() // IL for: // e.Data.Add("DictKey", true) throw } fault { ldstr "1: Fault handler" call void [mscorlib]System.Console::WriteLine(string) endfault } } filter { ldstr "2a: Filter logic" call void [mscorlib]System.Console::WriteLine(string) // IL for: // (bool)((Exception)e).Data["DictKey"] endfilter }{ ldstr "2b: Filter handler" call void [mscorlib]System.Console::WriteLine(string) leave.s Return } } catch object { ldstr "3: Catch handler" call void [mscorlib]System.Console::WriteLine(string) leave.s Return } Return: // rest of method If the filter handler is engaged (true is inserted into the exception dictionary) then the filter handler gets engaged, and the following gets printed to the console: 2a: Filter logic 1: Fault handler 2b: Filter handler and if the filter handler isn't engaged, then the following is printed: 2a:Filter logic 1: Fault handler 3: Catch handler Filter handler execution The filter handler is executed first. Hmm, ok. Well, what happens if we replaced the fault block with the C# equivalent (with the exception dictionary value set to false)? .try { // throw exception } catch object { ldstr "1: Fault handler" call void [mscorlib]System.Console::WriteLine(string) rethrow } we get this: 1: Fault handler 2a: Filter logic 3: Catch handler The fault handler is executed first, instead of the filter block. Eh? This change in behaviour is due to the way the CLR searches for exception handlers. When an exception is thrown, the CLR stops execution of the thread, and searches up the stack for an exception handler that can handle the exception and stop it propagating further - catch or filter handlers. It checks the type clause of catch clauses, and executes the code in filter blocks to see if the filter can handle the exception. When the CLR finds a valid handler, it saves the handler's location, then goes back to where the exception was thrown and executes fault and finally blocks between there and the handler location, discarding stack frames in the process, until it reaches the handler. So? By replacing a fault with a catch, we have changed the semantics of when the filter code is executed; by using a rethrow instruction, we've split up the exception handler search into two - one search to find the first catch, then a second when the rethrow instruction is encountered. This is only really obvious when mixing C# exception handlers with fault or filter handlers, so this doesn't affect code written only in C#. However it could cause some subtle and hard-to-debug effects with object initialization and ordering when using and calling code written in a language that can compile fault and filter handlers.
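
    For readers who want to see this two-pass behaviour without writing IL: later versions of C# expose filters through the when keyword, and a minimal sketch like the one below (my own example, not from the original post) prints the filter logic before the finally block runs, mirroring the filter-before-fault ordering described above.

    using System;

    class FilterOrderingDemo
    {
        static bool Log(string message) { Console.WriteLine(message); return true; }

        static void Main()
        {
            try
            {
                try
                {
                    throw new Exception();
                }
                finally
                {
                    // Runs during the second pass, like the fault block above.
                    Console.WriteLine("1: Finally");
                }
            }
            catch (Exception) when (Log("2a: Filter logic"))
            {
                Console.WriteLine("2b: Filter handler");
            }
            // Output order: "2a: Filter logic", then "1: Finally", then "2b: Filter handler".
        }
    }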

    Read the article

  • racoon-tool doesn't generate full racoon.conf file in /var/lib/racoon/racoon.conf

    - by robthewolf
    I am using ipsec-tools/racoon to create my VPN. I am using racoon-tool to configure racoon.conf but when I run racoon-tool reload it only generates the first section - Global items. When I run racoon-tool I get: # racoon-tool reload Loading SAD and SPD... SAD and SPD loaded. Configuring racoon...done. This is the entire file /var/lib/racoon/racoon.conf # # Racoon configuration for Samuel # Generated on Wed Jan 5 21:31:49 2011 by racoon-tool # # # Global items # path pre_shared_key "/etc/racoon/psk.txt"; path certificate "/etc/racoon/certs"; log debug; I cannot find anywhere a solution as to why this is happening. Please help

    Read the article

  • Error using SoapClient() in PHP [migrated]

    - by Dhaval
    I'm trying to access a WSDL (Web Services Description Language) file using PHP's SoapClient(). I found that the WSDL file requires authentication. I tried passing credentials in an options array as a second parameter and enabled SSL on my server, but I'm still getting an error. Here is the code I'm using:

    $client = new SoapClient("https://webservices.chargepointportal.net:8081/coulomb_api_1.1.wsdl",
        array("trace" => "1", "Username" => "username", "Password" => "password"));

    Here is the error I'm getting:

    Warning: SoapClient::SoapClient(https://webservices.chargepointportal.net:8081/coulomb_api_1.1.wsdl) [soapclient.soapclient]: failed to open stream: Connection timed out in PATH_TO_FILE on line 80
    Warning: SoapClient::SoapClient() [soapclient.soapclient]: I/O warning : failed to load external entity "https://webservices.chargepointportal.net:8081/coulomb_api_1.1.wsdl" in PATH_TO_FILE on line 80
    Fatal error: Uncaught SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL: Couldn't load from 'https://webservices.chargepointportal.net:8081/coulomb_api_1.1.wsdl' : failed to load external entity "https://webservices.chargepointportal.net:8081/coulomb_api_1.1.wsdl" in PATH_TO_FILE:80 Stack trace: #0 /home2/wingstec/public_html/widget/API/index.php(80): SoapClient->SoapClient('https://webserv...', Array) #1 {main} thrown in PATH_TO_FILE on line 80

    It seems that the error says the file doesn't exist at the given path, but when we open that path directly in a browser we do get the file. Can anyone help me figure out what exactly the problem is?

    Read the article

  • Google Analytics checkout page tracking problem

    - by Amir E. Habib
    I am running a multilingual website, each language on a different domain name. I am trying to lead all purchase requests to the checkout process, which has its own domain too. In order to keep Google Analytics tracking I've updated the Google Analytics code accordingly. I set the source domain to 'multiple top-level domains'. Everything is going fine so far, except that in the E-commerce Overview the "Source / Medium" is always showing as (direct) - or the name of the source domain. Since I am redirecting using PHP header(location:.. etc.) the Google _link method doesn't seem to be working properly - I want to focus on two questions: Should I create a new profile for the checkout domain in Google Analytics? (I am now using the profile ID of the source domain even though I move to the checkout domain, is that OK?) When I'm trying to pass the cookies of the source domain to the checkout domain, I notice that the Google cookies are copied to the new domain (the cookie path is .checkout-domain/) and they have the same values as the original cookies - but for some reason another set of cookies is created once I access a page with Google Analytics code in the checkout pages, with different values (same path). It feels like I'm doing something wrong here, so my question is - what am I doing wrong? Does anyone have an idea how to pass the cookies to the checkout domain?

    Read the article

  • Non-zero exit status for clean exit

    - by trinithis
    Is it acceptable to return a non-zero exit code if the program in question ran properly? For example, say I have a simple program that (only) does the following: the program takes N arguments and returns an exit code of min(N, 255). Note that any N is valid for the program. A more realistic program might return different codes for successful runs that signify different things. Should these programs write this information to a stream instead, such as to stdout?
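
    A minimal sketch of the hypothetical program described above (C# chosen purely for illustration; the question is language-agnostic):

    using System;

    class ArgumentCounter
    {
        // The exit code is the argument count, capped at 255 so it survives
        // platforms that truncate exit statuses to a single byte.
        static int Main(string[] args)
        {
            return Math.Min(args.Length, 255);
        }
    }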

    Read the article

  • Generic Repository with SQLite and SQL Compact Databases

    - by Andrew Petersen
    I am creating a project that has a mobile app (Xamarin.Android) using a SQLite database and a WPF application (Code First Entity Framework 5) using a SQL Compact database. This project will even eventually have a SQL Server database as well. Because of this I am trying to create a generic repository, so that I can pass in the correct context depending on which application is making the request. The issue I ran into is my DataContext for the SQL Compact database inherits from DbContext and the SQLite database inherits from SQLiteConnection. What is the best way to make this generic, so that it doesn't matter what kind of database is on the back end? This is what I have tried so far on the SQL Compact side: public interface IRepository<TEntity> { TEntity Add(TEntity entity); } public class Repository<TEntity, TContext> : IRepository<TEntity>, IDisposable where TEntity : class where TContext : DbContext { private readonly TContext _context; public Repository(DbContext dbContext) { _context = dbContext as TContext; } public virtual TEntity Add(TEntity entity) { return _context.Set<TEntity>().Add(entity); } } And on the SQLite side: public class ElverDatabase : SQLiteConnection { static readonly object Locker = new object(); public ElverDatabase(string path) : base(path) { CreateTable<Ticket>(); } public int Add<T>(T item) where T : IBusinessEntity { lock (Locker) { return Insert(item); } } }
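
    One possible direction (a hedged sketch under my own assumptions, not a definitive answer): keep IRepository<TEntity> as the only abstraction the applications see, and give the SQLite side its own implementation that wraps the connection, mirroring the EF-based Repository<TEntity, TContext> above.

    public class SqliteRepository<TEntity> : IRepository<TEntity>
        where TEntity : IBusinessEntity
    {
        private static readonly object Locker = new object();
        private readonly SQLiteConnection _connection;

        public SqliteRepository(SQLiteConnection connection)
        {
            // Table creation is left to the connection owner, as ElverDatabase does above.
            _connection = connection;
        }

        public TEntity Add(TEntity entity)
        {
            lock (Locker)
            {
                _connection.Insert(entity); // sqlite-net style insert, as used in the question
                return entity;
            }
        }
    }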

    Read the article

  • Exception while bamboo is checking for changes in svn

    - by tangens
    While configuring a new test plan in Atlassian Bamboo, I get the following exception in the log: Recording error: Unable to detect changes : <projectname> Caused by: org.tmatesoft.svn.core.SVNException: svn: Unrecognized SSL message, plaintext connection? svn: OPTIONS request failed on <repo-path> I tried to access the svn repository by hand and it worked (the URL looks like this: https://myrepo.org/path/to/repo). The svn server sits behind an Apache HTTP server. Both the Bamboo server and the svn server are running Debian 4.0. Has anybody seen this error before and can give me a hint on how to fix it?

    Read the article

  • nginx, php-cgi and "No input file specified."

    - by Stephen Belanger
    I'm trying to get nginx to play nice with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names--basically anything.local. I know that stuff is working because I can access static files properly, however php files don't work. I get the standard "No input file specified." error which normally occurs when the file doesn't exist, but it definitely does exist and the path is correct because I can access the static files in the same path. It could possibly be a permissions thing, but I'm not sure how that could be an issue. I'm running this on Windows under my own user account, so I think it should have permission unless php-cgi is running under a different user without me telling it to. . Here's my config; worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; gzip on; server { # Listen for HTTP listen 80; # Match to local host names. server_name *.local; # We need to store a "cleaned" host. set $no_www $host; set $no_local $host; # Strip out www. if ($host ~* www\.(.*)) { set $no_www $1; rewrite ^(.*)$ $scheme://$no_www$1 permanent; } # Strip local for directory names. if ($no_www ~* (.*)\.local) { set $no_local $1; } # Define default path handler. location / { root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs; index index.php index.html index.htm; # Route non-existent paths through Kohana system router. try_files $uri $uri/ /index.php?kohana_uri=$request_uri; } # pass PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi.conf; } # Prevent access to system files. location ~ /\. { return 404; } location ~* ^/(modules|application|system) { return 404; } } }

    Read the article

  • How To Get Web Site Thumbnail Image In ASP.NET

    - by SAMIR BHOGAYTA
    Overview One very common requirement of many web applications is to display a thumbnail image of a web site. A typical example is to provide a link to a dynamic website displaying its current thumbnail image, or displaying images of websites with their links as a result of search (I love to see it on Google). Microsoft .NET Framework 2.0 makes it quite easier to do it in a ASP.NET application. Background In order to generate image of a web page, first we need to load the web page to get their html code, and then this html needs to be rendered in a web browser. After that, a screen shot can be taken easily. I think there is no easier way to do this. Before .NET framework 2.0 it was quite difficult to use a web browser in C# or VB.NET because we either have to use COM+ interoperability or third party controls which becomes headache later. WebBrowser control in .NET framework 2.0 In .NET framework 2.0 we have a new Windows Forms WebBrowser control which is a wrapper around old shwdoc.dll. All you really need to do is to drop a WebBrowser control from your Toolbox on your form in .NET framework 2.0. If you have not used WebBrowser control yet, it's quite easy to use and very consistent with other Windows Forms controls. Some important methods of WebBrowser control are. public bool GoBack(); public bool GoForward(); public void GoHome(); public void GoSearch(); public void Navigate(Uri url); public void DrawToBitmap(Bitmap bitmap, Rectangle targetBounds); These methods are self explanatory with their names like Navigate function which redirects browser to provided URL. It also has a number of useful overloads. The DrawToBitmap (inherited from Control) draws the current image of WebBrowser to the provided bitmap. Using WebBrowser control in ASP.NET 2.0 The Solution Let's start to implement the solution which we discussed above. First we will define a static method to get the web site thumbnail image. public static Bitmap GetWebSiteThumbnail(string Url, int BrowserWidth, int BrowserHeight, int ThumbnailWidth, int ThumbnailHeight) { WebsiteThumbnailImage thumbnailGenerator = new WebsiteThumbnailImage(Url, BrowserWidth, BrowserHeight, ThumbnailWidth, ThumbnailHeight); return thumbnailGenerator.GenerateWebSiteThumbnailImage(); } The WebsiteThumbnailImage class will have a public method named GenerateWebSiteThumbnailImage which will generate the website thumbnail image in a separate STA thread and wait for the thread to exit. In this case, I decided to Join method of Thread class to block the initial calling thread until the bitmap is actually available, and then return the generated web site thumbnail. public Bitmap GenerateWebSiteThumbnailImage() { Thread m_thread = new Thread(new ThreadStart(_GenerateWebSiteThumbnailImage)); m_thread.SetApartmentState(ApartmentState.STA); m_thread.Start(); m_thread.Join(); return m_Bitmap; } The _GenerateWebSiteThumbnailImage will create a WebBrowser control object and navigate to the provided Url. We also register for the DocumentCompleted event of the web browser control to take screen shot of the web page. To pass the flow to the other controls we need to perform a method call to Application.DoEvents(); and wait for the completion of the navigation until the browser state changes to Complete in a loop. 
    private void _GenerateWebSiteThumbnailImage()
    {
        WebBrowser m_WebBrowser = new WebBrowser();
        m_WebBrowser.ScrollBarsEnabled = false;
        m_WebBrowser.Navigate(m_Url);
        m_WebBrowser.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(WebBrowser_DocumentCompleted);
        while (m_WebBrowser.ReadyState != WebBrowserReadyState.Complete)
            Application.DoEvents();
        m_WebBrowser.Dispose();
    }

    The DocumentCompleted event will be fired when the navigation is completed and the browser is ready for a screen shot. We will take the screen shot using the DrawToBitmap method described previously, which draws the current image of the web browser into the provided bitmap. Then the thumbnail image is generated using the GetThumbnailImage method of the Bitmap class, passing it the required thumbnail image width and height.

    private void WebBrowser_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
    {
        WebBrowser m_WebBrowser = (WebBrowser)sender;
        m_WebBrowser.ClientSize = new Size(this.m_BrowserWidth, this.m_BrowserHeight);
        m_WebBrowser.ScrollBarsEnabled = false;
        m_Bitmap = new Bitmap(m_WebBrowser.Bounds.Width, m_WebBrowser.Bounds.Height);
        m_WebBrowser.BringToFront();
        m_WebBrowser.DrawToBitmap(m_Bitmap, m_WebBrowser.Bounds);
        m_Bitmap = (Bitmap)m_Bitmap.GetThumbnailImage(m_ThumbnailWidth, m_ThumbnailHeight, null, IntPtr.Zero);
    }

    One more example here: http://www.codeproject.com/KB/aspnet/Website_URL_Screenshot.aspx
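
    A hedged usage sketch of the helper described above (assuming the static GetWebSiteThumbnail method lives on the WebsiteThumbnailImage class; the URL and output path are just examples):

    using System.Drawing;
    using System.Drawing.Imaging;

    // Render the page in a 1024x768 browser and keep a 200x150 thumbnail.
    Bitmap thumbnail = WebsiteThumbnailImage.GetWebSiteThumbnail(
        "http://www.example.com", 1024, 768, 200, 150);
    thumbnail.Save(@"C:\temp\site-thumbnail.png", ImageFormat.Png);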

    Read the article

  • Different settings for secure & non-secure versions of Django site using WSGI

    - by Jordan Reiter
    I have a Django website where some of the URLs need to be served over HTTPS and some over a normal connection. It's running on Apache and using WSGI. Here's the config:

    <VirtualHost example.org:80>
        ServerName example.org
        DocumentRoot /var/www/html/mysite
        WSGIDaemonProcess mysite
        WSGIProcessGroup mysite
        WSGIScriptAlias / /path/to/mysite/conferencemanager.wsgi
    </VirtualHost>

    <VirtualHost *:443>
        ServerName example.org
        DocumentRoot /var/www/html/mysite
        WSGIProcessGroup mysite
        SSLEngine on
        SSLCertificateFile /etc/httpd/certs/aace.org.crt
        SSLCertificateKeyFile /etc/httpd/certs/aace.org.key
        SSLCertificateChainFile /etc/httpd/certs/gd_bundle.crt
        WSGIScriptAlias / /path/to/mysite/conferencemanager_secure.wsgi
    </VirtualHost>

    When I restart the server, the first site that gets called -- https or http -- appears to select which WSGI script alias gets used. I just need a few settings to be different for the secure server, which is why I'm using a different WSGI script. Alternatively, if there's a way to change settings in the settings.py file based on whether the connection is secure or not, that would also work. Thanks

    Read the article

  • Use a template to get alternate behaviour?

    - by Serge
    Is this a bad practice? const int sId(int const id); // true/false it doesn't matter template<bool i> const int sId(int const id) { return this->id = id; } const int MCard::sId(int const id){ MCard card = *this; this->id = id; this->onChange.fire(EventArgs<MCard&, MCard&>(*this, card)); return this->id; } myCard.sId(9); myCard.sId<true>(8); As you can see, my goal is to be able to have an alternative behaviour for sId. I know I could use a second parameter to the function and use a if, but this feels more fun (imo) and might prevent branch prediction (I'm no expert in that field). So, is it a valid practice, and/or is there a better approach?

    Read the article

  • If Expression True in Immediate Window But Code In if Block Never Runs

    - by Julian
    I set a break point in my code in MonoDevelop to break whenever I click on a surface. I then enter the immediate window and test to see if the if statement will return true when comparing two Vector3's. It does return true. However, when I step over, the code is never run, as though the statement evaluated false. Does anyone know how this could be possible? I've attached a screenshot. Here is the picture of my debug window and immediate window. You can see where the immediate window evaluates to true. The second breakpoint is not hit. Here are the details of the two Vector3's I am comparing. Does anyone know why I am experiencing this? It really seems like an anomaly to me :/ Does it have something to do with threading?

    Read the article

  • What useful things can one add to one's .bashrc ?

    - by gyaresu
    Is there anything that you can't live without and will make my life SO much easier? Here are some that I use ('diskspace' & 'folders' are particularly handy). # some more ls aliases alias ll='ls -alh' alias la='ls -A' alias l='ls -CFlh' alias woo='fortune' alias lsd="ls -alF | grep /$" # This is GOLD for finding out what is taking so much space on your drives! alias diskspace="du -S | sort -n -r |more" # Command line mplayer movie watching for the win. alias mp="mplayer -fs" # Show me the size (sorted) of only the folders in this directory alias folders="find . -maxdepth 1 -type d -print | xargs du -sk | sort -rn" # This will keep you sane when you're about to smash the keyboard again. alias frak="fortune" # This is where you put your hand rolled scripts (remember to chmod them) PATH="$HOME/bin:$PATH"

    Read the article

  • ZenGallery: a minimalist image gallery for Orchard

    - by Bertrand Le Roy
    There are quite a few image gallery modules for Orchard, but they were not invented here. I wanted something a lot less sophisticated that would be as barebones and minimalist as possible out of the box, to make customization extremely easy. So I made this, in less than two days (during which I got distracted a lot). Nwazet.ZenGallery uses existing Orchard features as much as it can:
    - Galleries are just a content part that can be added to any type
    - The set of photos in a gallery is simply defined by a folder in Media
    - Managing the images in a gallery is done using the standard media management from Orchard
    - Ordering of photos is simply alphabetical order of the filenames (use 1_, 2_, etc. prefixes if you have to)
    - The path to the gallery folder is mapped from the content item using a token-based pattern
    - The pattern can be set per content type
    - You can edit the generated gallery path for each item
    - The default template is just a list of links over images, which get opened in a new tab
    - No lightbox script comes with the module, just customize the template to use your favorite script
    Light, light, light. Rather than explaining this very simple module in more detail, here is a video that shows how I used the module to add photo galleries to a product catalog: Adding a gallery to a product catalog. You can find the module on the Orchard Gallery: https://gallery.orchardproject.net/List/Modules/Orchard.Module.Nwazet.ZenGallery/ The source code is available from BitBucket: https://bitbucket.org/bleroy/nwazet.zengallery

    Read the article

  • How to sort a ListView control by a column in Visual C#

    - by bconlon
    Microsoft provide an article of the same name (previously published as Q319401) and it shows a nice class 'ListViewColumnSorter' for sorting a standard ListView when the user clicks the column header. This is very useful for String values, however for Numeric or DateTime data it gives odd results. E.g. 100 would come before 99 in an ascending sort as the string compare sees 1 < 9. So my challenge was to allow other types to be sorted. This turned out to be fairly simple as I just needed to create an inner class in ListViewColumnSorter which extends the .Net CaseInsensitiveComparer class, and then use this as the ObjectCompare member's type. Note: Ideally we would be able to use IComparer as the member's type, but the Compare method is not virtual in CaseInsensitiveComparer, so we have to create an exact type:

    public class ListViewColumnSorter : IComparer
    {
        // was: private CaseInsensitiveComparer ObjectCompare;
        private MyComparer ObjectCompare;
        ... rest of Microsoft's class implementation ...
    }

    Here is my private inner comparer class, note the 'new int Compare' as Compare is not virtual, and also note we pass the values to the base compare as the correct type (e.g. Decimal, DateTime) so they compare correctly:

    private class MyComparer : CaseInsensitiveComparer
    {
        public new int Compare(object x, object y)
        {
            try
            {
                string s1 = x.ToString();
                string s2 = y.ToString();

                // check for a numeric column
                decimal n1, n2 = 0;
                if (Decimal.TryParse(s1, out n1) && Decimal.TryParse(s2, out n2))
                    return base.Compare(n1, n2);
                else
                {
                    // check for a date column
                    DateTime d1, d2;
                    if (DateTime.TryParse(s1, out d1) && DateTime.TryParse(s2, out d2))
                        return base.Compare(d1, d2);
                }
            }
            catch (ArgumentException) { }

            // just use base string compare
            return base.Compare(x, y);
        }
    }

    You could extend this for other types, even custom classes as long as they support IComparable. Microsoft also have another article How to: Sort a GridView Column When a Header Is Clicked that shows this for WPF, which looks conceptually very similar. I need to test it out to see if it handles non-string types.
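
    A hedged wiring sketch (my own, not from the article): hook the sorter up to a ListView so clicking a header toggles the sort direction. SortColumn and Order are the property names used in the Microsoft sample; adjust them if your copy differs.

    var sorter = new ListViewColumnSorter();
    listView1.ListViewItemSorter = sorter;
    listView1.ColumnClick += (sender, e) =>
    {
        if (e.Column == sorter.SortColumn)
        {
            // Same column clicked again: flip the direction.
            sorter.Order = (sorter.Order == SortOrder.Ascending)
                ? SortOrder.Descending
                : SortOrder.Ascending;
        }
        else
        {
            sorter.SortColumn = e.Column;
            sorter.Order = SortOrder.Ascending;
        }
        listView1.Sort();
    };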

    Read the article

  • Matrix multiplication - Scene Graphs

    - by bgarate
    I wrote a MatrixStack class in C# to use in a SceneGraph. So, to get the world matrix for an object I am supposed to use: WorldMatrix = ParentWorld * LocalTransform. But, in fact, it only works as expected when I do it the other way around: WorldMatrix = LocalTransform * ParentWorld. My code is:

    public class MatrixStack
    {
        Stack<Matrix> stack = new Stack<Matrix>();
        Matrix result = Matrix.Identity;

        public void PushMatrix(Matrix matrix)
        {
            stack.Push(matrix);
            result = matrix * result;
        }

        public Matrix PopMatrix()
        {
            result = Matrix.Invert(stack.Peek()) * result;
            return stack.Pop();
        }

        public Matrix Result
        {
            get { return result; }
        }

        public void Clear()
        {
            stack.Clear();
            result = Matrix.Identity;
        }
    }

    Why does it work this way and not the other? Thanks!
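
    An illustrative note, assuming this is the XNA-style Matrix type (an assumption on my part, not stated in the question): XNA treats vectors as row vectors, so a point is transformed as v * M and the matrix applied first sits on the left of the product.

    // Hypothetical sketch of the row-vector convention: transform by the local matrix
    // first, then by the parent's world matrix.
    Vector3 worldPoint = Vector3.Transform(localPoint, localTransform * parentWorld);

    Under that convention, WorldMatrix = LocalTransform * ParentWorld is the expected order, while column-vector conventions (as in many maths texts) write the composition the other way around.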

    Read the article

  • Extending Currying: Partial Functions in Javascript

    - by kerry
    Last week I posted about function currying in javascript. This week I am taking it a step further by adding the ability to call partial functions. Suppose we have a graphing application that will pull data via Ajax and perform some calculation to update a graph, using a method with the signature 'updateGraph(id, value)'. To do this, we have to do something like this:

    for(var i=0;i<objects.length;i++) {
        Ajax.request('/some/data', {id: objects[i].id}, function(json) {
            updateGraph(json.id, json.value);
        });
    }

    This works fine. But, using this method, we need to return the id in the json response from the server. This works, but it is not that elegant and increases network traffic. Using partial function currying we can bind the id parameter and add the second parameter later (when returning from the asynchronous call). To do this, we will need the updated curry method. I have added support for sending additional parameters at runtime for curried methods.

    Function.prototype.curry = function(scope) {
        scope = scope || window;
        var args = [];
        for (var i=1, len = arguments.length; i < len; ++i) {
            args.push(arguments[i]);
        }
        var m = this;
        return function() {
            for (var i=0, len = arguments.length; i < len; ++i) {
                args.push(arguments[i]);
            }
            return m.apply(scope, args);
        };
    }

    To partially curry this method we will call the curry method with the id parameter, then the request will call back on it with just the value. Any additional parameters are appended to the method call.

    for(var i=0;i<objects.length;i++) {
        var id = objects[i].id;
        Ajax.request('/some/data', {id: id}, updateGraph.curry(id));
    }

    As you can see, partial currying gives us a very useful tool and this simple method should be a part of every developer's toolbox.

    Read the article

  • optimizing file share performance on Win2k8?

    - by Kirk Marple
    We have a case where we're accessing a RAID array (drive E:) on a Windows Server 2008 SP2 x86 box. (Recently installed, nothing other than SQL Server 2005 on the server.) In one scenario, when directly accessing it (E:\folder\file.xxx) we get 45MBps throughput to a video file. If we access the same file on the same array, but through UNC path (\\server\folder\file.xxx) we get about 23MBps throughput with the exact same test. Obviously the second test is going through more layers of the stack, but that's a major performance hit. What tuning should we be looking at for making the UNC path be closer in performance to the direct access case? Thanks, Kirk (corrected: it is CIFS not SMB, but generalized title to 'file share'.) (additional info: this happens during the read from a single file, not an issue across multiple connections. the file is on the local machine, but exposed via file share. so client and file server are both same Windows 2008 server.)

    Read the article

  • Why enumerator structs are a really bad idea

    - by Simon Cooper
    If you've ever poked around the .NET class libraries in Reflector, I'm sure you would have noticed that the generic collection classes all have implementations of their IEnumerator as a struct rather than a class. As you will see, this design decision has some rather unfortunate side effects... As is generally known in the .NET world, mutable structs are a Very Bad Idea; and there are several other blogs around explaining this (Eric Lippert's blog post explains the problem quite well). In the BCL, the generic collection enumerators are all mutable structs, as they need to keep track of where they are in the collection. This bit me quite hard when I was coding a wrapper around a LinkedList<int>.Enumerator. It boils down to this code: sealed class EnumeratorWrapper : IEnumerator<int> { private readonly LinkedList<int>.Enumerator m_Enumerator; public EnumeratorWrapper(LinkedList<int> linkedList) { m_Enumerator = linkedList.GetEnumerator(); } public int Current { get { return m_Enumerator.Current; } } object System.Collections.IEnumerator.Current { get { return Current; } } public bool MoveNext() { return m_Enumerator.MoveNext(); } public void Reset() { ((System.Collections.IEnumerator)m_Enumerator).Reset(); } public void Dispose() { m_Enumerator.Dispose(); } } The key line here is the MoveNext method. When I initially coded this, I thought that the call to m_Enumerator.MoveNext() would alter the enumerator state in the m_Enumerator class variable and so the enumeration would proceed in an orderly fashion through the collection. However, when I ran this code it went into an infinite loop - the m_Enumerator.MoveNext() call wasn't actually changing the state in the m_Enumerator variable at all, and my code was looping forever on the first collection element. It was only after disassembling that method that I found out what was going on The MoveNext method above results in the following IL: .method public hidebysig newslot virtual final instance bool MoveNext() cil managed { .maxstack 1 .locals init ( [0] bool CS$1$0000, [1] valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator CS$0$0001) L_0000: nop L_0001: ldarg.0 L_0002: ldfld valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator L_0007: stloc.1 L_0008: ldloca.s CS$0$0001 L_000a: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext() L_000f: stloc.0 L_0010: br.s L_0012 L_0012: ldloc.0 L_0013: ret } Here, the important line is 0002 - m_Enumerator is accessed using the ldfld operator, which does the following: Finds the value of a field in the object whose reference is currently on the evaluation stack. So, what the MoveNext method is doing is the following: public bool MoveNext() { LinkedList<int>.Enumerator CS$0$0001 = this.m_Enumerator; bool CS$1$0000 = CS$0$0001.MoveNext(); return CS$1$0000; } The enumerator instance being modified by the call to MoveNext is the one stored in the CS$0$0001 variable on the stack, and not the one in the EnumeratorWrapper class instance. Hence why the state of m_Enumerator wasn't getting updated. Hmm, ok. Well, why is it doing this? If you have a read of Eric Lippert's blog post about this issue, you'll notice he quotes a few sections of the C# spec. In particular, 7.5.4: ...if the field is readonly and the reference occurs outside an instance constructor of the class in which the field is declared, then the result is a value, namely the value of the field I in the object referenced by E. 
And my m_Enumerator field is readonly! Indeed, if I remove the readonly from the class variable then the problem goes away, and the code works as expected. The IL confirms this: .method public hidebysig newslot virtual final instance bool MoveNext() cil managed { .maxstack 1 .locals init ( [0] bool CS$1$0000) L_0000: nop L_0001: ldarg.0 L_0002: ldflda valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator L_0007: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext() L_000c: stloc.0 L_000d: br.s L_000f L_000f: ldloc.0 L_0010: ret } Notice on line 0002, instead of the ldfld we had before, we've got a ldflda, which does this: Finds the address of a field in the object whose reference is currently on the evaluation stack. Instead of loading the value, we're loading the address of the m_Enumerator field. So now the call to MoveNext modifies the enumerator stored in the class rather than on the stack, and everything works as expected. Previously, I had thought enumerator structs were an odd but interesting feature of the BCL that I had used in the past to do linked list slices. However, effects like this only underline how dangerous mutable structs are, and I'm at a loss to explain why the enumerators were implemented as structs in the first place. (interestingly, the SortedList<TKey, TValue> enumerator is a struct but is private, which makes it even more odd - the only way it can be accessed is as a boxed IEnumerator!). I would love to hear people's theories as to why the enumerators are implemented in such a fashion. And bonus points if you can explain why LinkedList<int>.Enumerator.Reset is an explicit implementation but Dispose is implicit... Note to self: never ever ever code a mutable struct.
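
    A minimal repro of the copy semantics described above (my own sketch, not from the post):

    struct Counter
    {
        public int Value;
        public void Increment() { Value++; }
    }

    class Holder
    {
        private readonly Counter m_Counter;   // readonly: every access outside a constructor yields a value copy

        public void Bump()
        {
            m_Counter.Increment();            // compiles, but mutates a hidden copy, not the field
        }

        public int Current
        {
            get { return m_Counter.Value; }   // always 0, no matter how often Bump() is called
        }
    }

    Remove the readonly and Bump() mutates the real field - the same ldfld versus ldflda difference shown in the IL above.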

    Read the article

  • InvokeRequired not reliable?

    - by marocanu2001
    InvalidOperationException: Cross-thread operation not valid: Control 'progressBar' accessed from a thread other than the thread it was created ... Now this is not a nice way to start a day! Not even if it's a Monday! So you have seen this and already thought, come on, this is an old one, just check InvokeRequired before calling the method on the control, something like this:

    if (progressBar.InvokeRequired)
    {
        progressBar.Invoke(new MethodInvoker(delegate { progressBar.Value = 50; }));
    }
    else
        progressBar.Value = 50;

    Unfortunately this was not working the way I would have expected. I got this error, debugged, and although in debugging InvokeRequired had become true, the error was thrown on the branch that did not require Invoke. So there was a scenario where InvokeRequired returned false and still accessing the control was not on the right thread ... The problem was that I kept showing and hiding the little form showing the progressbar. The progressbar was updating on an event, ProgressChanged, and I could not guarantee the little form was loaded by the time the event was raised. So, spotted the problem: if none of the parents of the control you try to modify is created at the time of the method invocation, InvokeRequired returns false! That causes your code to execute on the wrong thread. Of course, updating UI before the window was created is not a legitimate action either, still I would have expected a different error. MSDN: "If the control's handle does not yet exist, InvokeRequired searches up the control's parent chain until it finds a control or form that does have a window handle. If no appropriate handle can be found, the InvokeRequired method returns false. This means that InvokeRequired can return false if Invoke is not required (the call occurs on the same thread), or if the control was created on a different thread but the control's handle has not yet been created." Have a look at InvokeRequired's implementation:

    public bool InvokeRequired
    {
        get
        {
            HandleRef hWnd;
            int lpdwProcessId;
            if (this.IsHandleCreated)
            {
                hWnd = new HandleRef(this, this.Handle);
            }
            else
            {
                Control wrapper = this.FindMarshallingControl();
                if (!wrapper.IsHandleCreated)
                {
                    return false; // <==========
                }
                hWnd = new HandleRef(wrapper, wrapper.Handle);
            }
            int windowThreadProcessId = SafeNativeMethods.GetWindowThreadProcessId(hWnd, out lpdwProcessId);
            int currentThreadId = SafeNativeMethods.GetCurrentThreadId();
            return (windowThreadProcessId != currentThreadId);
        }
    }

    Here's a good article about this and a workaround: http://www.ikriv.com/en/prog/info/dotnet/MysteriousHang.html
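
    A hedged workaround sketch (my own, not from the post or the linked article): don't touch the control at all until its handle exists, then marshal the update unconditionally.

    void SetProgress(int value)
    {
        // If the handle isn't created yet, InvokeRequired can't be trusted;
        // skip (or queue the update until the HandleCreated event fires).
        if (!progressBar.IsHandleCreated)
            return;

        progressBar.BeginInvoke(new MethodInvoker(() => progressBar.Value = value));
    }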

    Read the article

  • Change Win7 desktop background via Win2k3 Group Policy

    - by microchasm
    I'm experiencing some strange behavior: I have set the following policies: User Configuration\Administrative templates\Desktop\Active Desktop Enable active desktop [enabled] Active Desktop Wallpaper [set to local path -- quadruple checked; path is correct] Allow only bitmap wallpaper [disabled] gpupdate /force, log out, log back in, and the background is just black. If I go into themes, and select Windows 7, the appointed background then shows (it also flashes when logging out). gpresult I've tried: http://social.technet.microsoft.com/Forums/en/winserverGP/thread/a1ebfe81-421e-4630-8c1f-8068222ee533 http://support.microsoft.com/default.aspx?scid=kb;EN-US;977944 http://social.technet.microsoft.com/Forums/en/w7itproui/thread/5b9e513a-d504-451d-a121-b4f94893d96d and a few other things, but nothing seems to be working :/ Thanks for any help/tips/advice.

    Read the article

  • Sending a android.content.Context parameter to a function with JNI

    - by Ef Es
    I am trying to create a method that checks for internet connection that needs a Context parameter. The JNIHelper allows me to call static functions with parameters, but I don't know how to "retrieve" Cocos2d-x Activity class to use it as a parameter. public static boolean isNetworkAvailable(Context context) { boolean haveConnectedWifi = false; boolean haveConnectedMobile = false; ConnectivityManager cm = (ConnectivityManager) context.getSystemService( Context.CONNECTIVITY_SERVICE); NetworkInfo[] netInfo = cm.getAllNetworkInfo(); for (NetworkInfo ni : netInfo) { if (ni.getTypeName().equalsIgnoreCase("WIFI")) if (ni.isConnected()) haveConnectedWifi = true; if (ni.getTypeName().equalsIgnoreCase("MOBILE")) if (ni.isConnected()) haveConnectedMobile = true; } return haveConnectedWifi || haveConnectedMobile; } and the c++ code is JniMethodInfo methodInfo; if ( !JniHelper::getStaticMethodInfo( methodInfo, "my/app/TestApp", "isNetworkAvailable", "(android/content/Context;)V")) { //error return; } CCLog( "Method found and loaded!"); methodInfo.env->CallStaticVoidMethod( methodInfo.classID, methodInfo.methodID); methodInfo.env->DeleteLocalRef( methodInfo.classID);

    Read the article

  • IIS in VirtualBox serving files from shared Ubuntu folder

    - by queen3
    Here's the problem: I want to use Ubuntu, but I need to develop ASP.NET (MVC) sites. So I set up VirtualBox with Win2003 and IIS6, but I would prefer my working files to be located in my Ubuntu home folder. So I set up a shared folder in VirtualBox and tried to make an IIS6 virtual directory work from there. The problem is, IIS6 can't do this. Whatever I try (mapped drive, network URI path) I get different IIS errors: can't access folder (for the mapped drive), can't monitor file system changes (for the \\vboxsvr share path), and so on. Is there a way to configure an IIS6 virtual application folder in the virtual machine to be on the host machine (Ubuntu) - be it a shared folder, mapped drive, smb share, or whatever?

    Read the article
