Search Results

Search found 7954 results on 319 pages for 'behavior driven developme'.

Page 269 of 319

  • Why is calling close() after fopen() not closing?

    - by Richard Morgan
    I ran across the following code in one of our in-house DLLs and I am trying to understand the behavior it was showing:

        long GetFD(long* fd, const char* fileName, const char* mode)
        {
            string fileMode;
            if (strlen(mode) == 0 || tolower(mode[0]) == 'w' || tolower(mode[0]) == 'o')
                fileMode = string("w");
            else if (tolower(mode[0]) == 'a')
                fileMode = string("a");
            else if (tolower(mode[0]) == 'r')
                fileMode = string("r");
            else
                return -1;

            FILE* ofp;
            ofp = fopen(fileName, fileMode.c_str());
            if (!ofp)
                return -1;

            *fd = (long)_fileno(ofp);
            if (*fd < 0)
                return -1;

            return 0;
        }

        long CloseFD(long fd)
        {
            close((int)fd);
            return 0;
        }

    After repeatedly calling GetFD with the matching CloseFD, the whole DLL would no longer be able to do any file IO. I wrote a tester program and found that I could call GetFD 509 times, but the 510th time would error. Using Process Explorer, the number of handles did not increase. So it seems that the DLL is reaching the limit for the number of open files; setting _setmaxstdio(2048) does increase the number of times we can call GetFD. Obviously, the close() isn't working quite right. After a bit of searching, I replaced the fopen() call with:

        long GetFD(long* fd, const char* fileName, const char* mode)
        {
            *fd = (long)open(fileName, 2);
            if (*fd < 0)
                return -1;
            return 0;
        }

    Now, repeatedly calling GetFD/CloseFD works. What is going on here?

    Read the article

  • Android Webkit Input elements buggy with CSS3 translate3D

    - by Dansl
    I'm having a couple of issues with the Input element in a WebKit-based Android app I'm developing. I am testing on 2.x; 3.x doesn't seem to have these issues.

    The app works by having separate divs for each "page", and I'm using CSS3 translate3d to animate between the pages. Some of those pages include Input elements. When I tap on an input to give it focus, any of my "position:fixed" divs will shift about 5px from the top and 5px from the left. Now the kicker: it will eventually fix itself and then never happen again when you tap on an input; it's only that first time.

    My other problem: the Input elements are screwy with keyboards. For instance, spell corrections/autocomplete will not input text, and when using the Swype keyboard you can't "swipe" the word; only individual taps for each letter will input text into the Input element.

    I've read that a lot of this might be caused by CSS3 translate3d, but I've tried just about everything to fix these issues and searched just about every site for a solution, and I haven't been able to find a fix or anyone else with this problem. Does anyone else have these issues, or know of a fix (possible solution)? Does anyone know of a way to override the default behavior of Input elements in the WebKit view? I wonder if I can generate my own TextView and position it over these input fields? Any help is greatly appreciated :)

    Read the article

  • Javascript Getting Objects to Fallback to One Another

    - by Ian
    Here's an ugly bit of Javascript it would be nice to find a workaround for. Javascript has no classes, and that is a good thing. But it implements fallback between objects in a rather ugly way. The foundational construct should be to have one object that, when a property fails to be found, falls back to another object. So if we want a to fall back to b, we would want to do something like:

        a = {sun: 1};
        b = {dock: 2};
        a.__fallback__ = b;

    then:

        a.dock == 2;

    But Javascript instead provides a new operator and prototypes. So we do the far less elegant:

        function A(sun) { this.sun = sun; }
        A.prototype.dock = 2;
        a = new A(1);
        a.dock == 2;

    But aside from elegance, this is also strictly less powerful, because it means that anything created with A gets the same fallback object. What I would like to do is liberate Javascript from this artificial limitation and have the ability to give any individual object any other individual object as its fallback. That way I could keep the current behavior when it makes sense, but use object-level inheritance when that makes sense.

    My initial approach is to create a dummy constructor function:

        function setFallback(from_obj, to_obj) {
            from_obj.constructor = function () {};
            from_obj.constructor.prototype = to_obj;
        }

        a = {sun: 1};
        b = {dock: 2};
        setFallback(a, b);

    But unfortunately:

        a.dock == undefined;

    Any ideas why this doesn't work, or any solutions for an implementation of setFallback? (I'm running on V8, via node.js, in case this is platform dependent.)

    Read the article

  • how to set style through javascript in IE immediately

    - by rezna
    Hi, recently I've encountered a problem with IE. I have a function

        function () {
            ShowProgress();
            DoSomeWork();
            HideProgress();
        }

    where ShowProgress and HideProgress just manipulate the 'display' CSS style using jQuery's css() method. In FF everything is OK: the moment I change the display property to block, the progress bar appears. But not in IE. In IE the style is applied only once I leave the function, which means the progress bar is never shown, because at the end of the function I simply hide it. (If I remove the HideProgress line, the progress bar appears right after the function finishes executing -- more precisely, immediately when the calling function ends, since there's nothing else going on in IE.)

    Has anybody encountered this behavior? Is there a way to get IE to apply the style immediately?

    I've prepared a solution, but it would take me some time to implement. My DoSomeWork() method is doing some AJAX calls, and these are currently synchronous. I assume that making them asynchronous would more or less solve the problem, but I would have to redesign the code a bit, so a solution that just applies the style immediately would be much simpler.

    Thanks, rezna

    Read the article

  • ImageViews sometimes not displaying in FrameLayout activity

    - by Ken
    The top-level layout in my activity is a FrameLayout. I have completed, debugged and tested this app and it works exactly like it should in all respects on my G1 and on various emulators. But on 3.7-inch displays running 2.1+, some ImageViews packed in a LinearLayout are periodically not visible. I know that they are there because you can touch and drag them with effect in the app. So I assume somehow they have gotten under the SurfaceView that is the main component of the app. This is apparently so even though the SurfaceView is declared in the XML prior to the LinearLayout. However, the ImageViews IN the LinearLayout are added programmatically towards the end of onCreate().

    FrameLayout stacks everything that is added to it, one on top of the other -- the only way you will see more than one child of a FrameLayout is if they are smaller than the screen and are placed apart from each other. Oddly, sometimes the ImageViews ARE visible -- it is random. Anyway, I've been trying to combat this with frameLayout.bringChildToFront(View v) on the LinearLayout, without success.

    I wonder if anyone has any insight into how the behavior could be random like that, how I should code these ImageViews to keep this from happening, and why the problem appears to occur only on 3.7-inch vs 3.2-inch screens (as it happens, the two 3.2-inch screens were both HTC, so vendor might be a factor too).

    [edit] Actually, I've determined that this is a 2.2 issue, not a screen size (or even vendor) issue. I can't ensure that ImageViews added to a FrameLayout with a SurfaceView in it will appear on top of the SurfaceView. I ran some tests in the respective onDraw() methods and the ImageViews are 'visible' (0), and nothing changes the alpha of the drawables, which are there as well at onDraw(). [/edit]

    Any insight would be welcomed. Ken T.

    Read the article

  • How to specify allowed exceptions in WCF's configuration file?

    - by tucaz
    Hello! I'm building a set of WCF services for internal use across all our applications. For exception handling I created a default fault class so I can return a treated message to the caller when that is possible, or a generic one when I have no clue what happened.

    Fault contract:

        [DataContract(Name = "DefaultFault", Namespace = "http://fnac.com.br/api/2010/03")]
        public class DefaultFault
        {
            public DefaultFault(DefaultFaultItem[] items)
            {
                if (items == null || items.Length == 0)
                {
                    throw new ArgumentNullException("items");
                }

                StringBuilder sbItems = new StringBuilder();
                for (int i = 0; i ...

    Specifying that my method can throw this exception, so the consuming client will be aware of it:

        [OperationContract(Name = "PlaceOrder")]
        [FaultContract(typeof(DefaultFault))]
        [WebInvoke(UriTemplate = "/orders",
                   BodyStyle = WebMessageBodyStyle.Bare,
                   ResponseFormat = WebMessageFormat.Json,
                   RequestFormat = WebMessageFormat.Json,
                   Method = "POST")]
        string PlaceOrder(Order newOrder);

    Most of the time we will use plain .NET-to-.NET communication with the usual bindings, and everything works fine since we are talking the same language. However, as you can see in the service contract declaration, I have a WebInvoke attribute (and a webHttp binding) in order to also be able to talk JSON, since one of our apps will be built for the iPhone and that client will talk JSON.

    My problem is that whenever I throw a FaultException and have includeExceptionDetails="false" in the config file, the calling client gets a generic HTTP error instead of my custom message. I understand that this is the correct behavior when includeExceptionDetails is turned off, but I think I saw, a long time ago, some configuration that allows certain exceptions/faults to pass through the service boundary. Is there such a thing? If not, what do you suggest for my case? Thanks a LOT!
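
    For what it's worth, one pattern that is commonly suggested (sketched below in C#; the helper and item constructor are illustrative, not from the original post) is to throw the declared fault as a typed FaultException<DefaultFault>. The serviceDebug setting that hides exception details is generally understood to apply to unhandled exceptions, not to faults that are part of the contract:

        // Sketch only: ProcessOrder and the DefaultFaultItem(string) constructor are assumed for illustration.
        public string PlaceOrder(Order newOrder)
        {
            try
            {
                return ProcessOrder(newOrder);
            }
            catch (InvalidOperationException ex)
            {
                var fault = new DefaultFault(new[] { new DefaultFaultItem(ex.Message) });
                // A typed fault declared via [FaultContract] travels to the caller even
                // when includeExceptionDetailInFaults is false.
                throw new FaultException<DefaultFault>(fault, new FaultReason("Order could not be placed."));
            }
        }

    Whether the fault body is then rendered as JSON over the webHttp binding is a separate question; behavior there differs between WCF versions, so this is a direction to test rather than a guaranteed fix.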

    Read the article

  • How is timezone handled in the lifecycle of an ADO.NET + SQL Server DateTime column?

    - by stimpy77
    Using SQL Server 2008. This is a really junior question and I could really use some elaborate information, but the information on Google seems to dance around the topic quite a bit, and it would be nice if there was some detailed elaboration on how this works...

    Let's say I have a datetime column and in ADO.NET I set it to DateTime.UtcNow.

    1) Does SQL Server store DateTime.UtcNow accordingly, or does it offset it again based on the timezone of where the server is installed, and then return it offset-reversed when queried? I think I know that the answer is "of course it stores it without offsetting it again" but want to be certain.

    So then I query for it and cast it from, say, an IDataReader column to a DateTime. As far as I know, System.DateTime has metadata that internally tracks whether it is a UTC DateTime or an offsetted DateTime, which may or may not cause .ToLocalTime() and .ToUniversalTime() to behave differently depending on this state. So,

    2) Does this casted System.DateTime object already know that it is a UTC DateTime instance, or does it assume that it has been offset?

    Now let's say I don't use UtcNow, I use DateTime.Now, when performing an ADO.NET INSERT or UPDATE.

    3) Does ADO.NET pass the offset to SQL Server, and does SQL Server store DateTime.Now with the offset metadata?

    So then I query for it and cast it from, say, an IDataReader column to a DateTime.

    4) Does this casted System.DateTime object already know that it is an offset time, or does it assume that it is UTC?
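
    Not from the original post, but a small C# check along these lines makes the round-trip behavior visible. The SQL Server datetime type stores no time zone or "kind" information, and values read back through ADO.NET typically come out with DateTimeKind.Unspecified, which is why a common convention is to write UTC consistently and re-stamp the kind on the way out:

        // Sketch only: connection string, table and column names are illustrative, not from the post.
        using System;
        using System.Data.SqlClient;

        class KindCheck
        {
            static void Main()
            {
                using (var connection = new SqlConnection("Server=.;Database=Shop;Integrated Security=true"))
                using (var cmd = new SqlCommand("SELECT CreatedAt FROM Orders WHERE Id = @id", connection))
                {
                    connection.Open();
                    cmd.Parameters.AddWithValue("@id", 42);
                    var raw = (DateTime)cmd.ExecuteScalar();

                    Console.WriteLine(raw.Kind);   // usually Unspecified: the column carries no offset metadata

                    // If the application always writes DateTime.UtcNow, declare that explicitly on read:
                    var asUtc = DateTime.SpecifyKind(raw, DateTimeKind.Utc);
                    Console.WriteLine(asUtc.ToLocalTime());
                }
            }
        }

    SQL Server 2008 also has a datetimeoffset column type that does keep the offset, which may be the more direct answer to questions 3 and 4.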

    Read the article

  • Setting Nullable Integer to String Containing Nothing yields 0

    - by Brian MacKay
    I've been pulling my hair out over some unexpected behavior from nullable integers. If I set an Integer? to Nothing, it becomes Nothing as expected. If I set an Integer? to a String that is Nothing, it becomes 0! Of course I get this whether I explicitly cast the String to Integer? or not. I realize I could work around this pretty easily, but I want to know what I'm missing.

        Dim NullString As String = Nothing
        Dim NullableInt As Integer? = CType(NullString, Integer?)
        'Expected NullableInt to be Nothing, but it's 0!

        NullableInt = Nothing
        'This works -- NullableInt now contains Nothing.

    How is this possible?

    EDIT: Previously I had my code up here without the explicit conversion to 'Integer?', and everyone seemed to be fixated on that. I want to be clear that this is not an issue that would have been caught by Option Strict On -- check out the accepted answer. This is a quirk of the string-to-integer conversion rules, which predate nullable types but still impact them.

    Read the article

  • UIButton stops responding after going into landscape mode - iPhone

    - by casey
    I've been trying different things for the last few days and I've run out of ideas, so I'm looking for help. The situation is that I'm displaying my in-app purchasing store view after the user clicks a button. Button pressed, view is displayed. The store shows fine. Inside this view I have a few labels with descriptions of the product, and below them the price and a Buy button which triggers the in-app purchase.

    The problem is that when I rotate the phone to landscape, that Buy button no longer responds. Weird. It works fine in portrait. The behavior in landscape when I touch the button is nothing: it doesn't appear to press down and become selected or anything, it just does not respond to my touches. But then when I rotate back to portrait, or even upside-down portrait, it works fine.

    Here is the rough structure of my view in IB; all the rotating and layout is set up in IB. I set the autoresizing in IB so that everything looks OK in landscape and the Buy button expands horizontally a little bit. The only layout manipulation I do in code is after loading, when I set the content size of the scroll view.

        File's Owner with view set to the scrollView
        / scrollView
        ----/ view
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ uibutton (Buy)

    After orientation changes I printed out the userInteractionEnabled property of the scrollView and the button, and they were both TRUE in all orientations. Ideas? Or maybe some other way of displaying a buy button that won't become nonfunctional? I've already begun a branch that plays with a toolbar and places the buy button there, but I can't seem to get the bar to stay in place while scrolling.

    Read the article

  • In a combobox, how do I determine the highlighted item (not selected item)?

    - by Harold Bamford
    First, fair warning: I am a complete newbie with C# and WPF.

    I have a combobox (editable, searchable) and I would like to be able to intercept the Delete key and remove the currently highlighted item from the list. The behavior I'm looking for is like that of MS Outlook when entering email addresses. When you type a few characters, a dropdown list of potential matches is displayed. If you move to one of these (with the arrow keys) and hit Delete, that entry is permanently removed. I want to do that with an entry in the combobox.

    Here is the XAML (simplified):

        <ComboBox x:Name="Directory"
                  KeyUp="Directory_KeyUp"
                  IsTextSearchEnabled="True"
                  IsEditable="True"
                  Text="{Binding Path=CurrentDirectory, Mode=TwoWay}"
                  ItemsSource="{Binding Source={x:Static self:Properties.Settings.Default}, Path=DirectoryList, Mode=TwoWay}" />

    The handler is:

        private void Directory_KeyUp(object sender, KeyEventArgs e)
        {
            ComboBox box = sender as ComboBox;
            if (box.IsDropDownOpen && (e.Key == Key.Delete))
            {
                TrimCombobox("DirectoryList", box.HighlightedItem); // won't compile!
            }
        }

    When using the debugger, I can see that box.HighlightedItem has the value I want, but when I try to put that in code, it fails to compile with:

        'System.Windows.Controls.ComboBox' does not contain a definition for 'HighlightedItem'...

    So: how do I access that value? Keep in mind that the item has not been selected. It is merely highlighted as the mouse hovers over it. Thanks for your help.
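
    One direction that is sometimes suggested (a C# sketch, not from the original post) is to skip the internal HighlightedItem property and instead walk the generated ComboBoxItem containers, which expose a public IsHighlighted property:

        // Sketch only: assumes it is called from Directory_KeyUp while the drop-down is open.
        private static object GetHighlightedItem(ComboBox box)
        {
            for (int i = 0; i < box.Items.Count; i++)
            {
                var container = box.ItemContainerGenerator.ContainerFromIndex(i) as ComboBoxItem;
                if (container != null && container.IsHighlighted)
                {
                    return box.Items[i];   // the data item behind the highlighted row
                }
            }
            return null;
        }

    Containers are created lazily (and may be virtualized), so the per-index null check is deliberate; whether this covers every highlighting case is worth verifying against the actual control template.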

    Read the article

  • Unity: Replace registered type with another type at runtime

    - by gehho
    We have a scenario where the user can choose between different hardware at runtime. In the background we have several different hardware classes which all implement an IHardware interface. We would like to use Unity to register the currently selected hardware instance for this interface. However, when the user selects another hardware, this would require us to replace this registration at runtime. The following example might make this clearer:

        public interface IHardware
        {
            // some methods...
        }

        public class HardwareA : IHardware
        {
            // ...
        }

        public class HardwareB : IHardware
        {
            // ...
        }

        container.RegisterInstance<IHardware>(new HardwareA());

        // user selects new hardware somewhere in the configuration...

        // the following is invalid code, but can it be achieved another way?
        container.ReplaceInstance<IHardware>(new HardwareB());

    Can this behavior be achieved somehow?

    BTW: I am completely aware that instances which have already been resolved from the container will not be replaced with the new instances, of course. We would take care of that ourselves by forcing them to resolve the instance once again.
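
    In the Unity releases we have seen discussed, simply registering again for the same interface appears to replace the previous mapping, so a "ReplaceInstance" helper can be a thin wrapper (a C# sketch, not an official Unity API):

        // Sketch only: relies on the observed "last registration wins" behavior of Unity.
        public static class UnityContainerExtensions
        {
            public static void ReplaceInstance<TInterface>(this IUnityContainer container, TInterface instance)
            {
                // Re-registering IHardware overwrites the earlier RegisterInstance mapping;
                // objects that already resolved the old instance keep it, as noted above.
                container.RegisterInstance<TInterface>(instance);
            }
        }

        // usage
        container.ReplaceInstance<IHardware>(new HardwareB());

    Verifying the overwrite behavior against the specific Unity version in use is worthwhile before relying on it.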

    Read the article

  • In Google Chrome, how do I bring an existing popup window to the front using javascript from the par

    - by brahn
    I would like to have a button on a web page with the following behavior: on the first click, open a pop-up; on later clicks, if the pop-up is still open, just bring it to the front, and if not, re-open it.

    The code below works in Firefox (Mac & Windows), Safari (Mac & Windows), and IE8. (I have not yet tested IE6 or IE7.) However, in Google Chrome (both Mac & Windows) later clicks fail to bring the existing pop-up to the front as desired. How can I make this work in Chrome?

        <head>
          <script type="text/javascript">
            var popupWindow = null;
            var doPopup = function () {
              if (popupWindow && !popupWindow.closed) {
                popupWindow.focus();
              } else {
                popupWindow = window.open("http://google.com", "_blank", "width=200,height=200");
              }
            };
          </script>
        </head>
        <body>
          <button onclick="doPopup(); return false">
            create a pop-up
          </button>
        </body>

    Background: I am re-asking this question specifically for Google Chrome, as I think my code solves the problem at least for other modern browsers and IE8. If there is a preferred etiquette for doing so, please let me know.

    Read the article

  • How do I implement configurations and settings?

    - by Malvolio
    I'm writing a system that is deployed in several places, and each site needs its own configurations and settings. A "configuration" is a named value that is necessary to a particular site (e.g., the database URL, S3 bucket name); every configuration is necessary, there is not usually a default, and it's typically string-valued. A setting is a named value too, but it just tweaks the behavior of the system; it's often numeric or Boolean, and there's usually some default.

    So far, I've been using property files or things like them, but it's a terrible solution. Several times, a developer has added a requirement for a configuration but not added the value to the file for the live configuration, so the new release passed all the tests, then failed when released to live.

    Better, of course, for every file to be compiled -- so if there's a missing configuration, or one of the wrong type, it won't get past the compiler -- and to inject the site-specific class into the build for each site. As a bonus, a Scala file can easily model more complex values, especially lists, but also maps and tuples. The downside is that the files are sometimes maintained by people who aren't developers, so it has to be pretty self-explanatory, which was the advantage of property files. (Someone explain XML configurations to me: all the complexity of a compilable file but the run-time risk of a property file.)

    What I'm looking for is an easy pattern for defining a group of required names and allowable values. Any suggestions?

    Read the article

  • Searching Natural Language Sentence Structure

    - by Cerin
    What's the best way to store and search a database of natural language sentence structure trees?

    Using OpenNLP's English Treebank Parser, I can get fairly reliable sentence structure parsings for arbitrary sentences. What I'd like to do is create a tool that can extract all the doc strings from my source code, generate these trees for all sentences in the doc strings, store these trees and their associated function name in a database, and then allow a user to search the database using natural language queries.

    So, given the sentence "This uploads files to a remote machine." for the function upload_files(), I'd have the tree:

        (TOP
          (S
            (NP (DT This))
            (VP (VBZ uploads)
                (NP (NNS files))
                (PP (TO to)
                    (NP (DT a) (JJ remote) (NN machine))))
            (. .)))

    If someone entered the query "How can I upload files?", equating to the tree:

        (TOP
          (SBARQ
            (WHADVP (WRB How))
            (SQ (MD can)
                (NP (PRP I))
                (VP (VB upload)
                    (NP (NNS files))))
            (. ?)))

    how would I store and query these trees in a SQL database?

    I've written a simple proof-of-concept script that can perform this search using a mix of regular expressions and network graph parsing, but I'm not sure how I'd implement this in a scalable way. And yes, I realize my example would be trivial to retrieve using a simple keyword search. The idea I'm trying to test is how I might take advantage of grammatical structure, so I can weed out entries with similar keywords but a different sentence structure. For example, with the above query, I wouldn't want to retrieve the entry associated with the sentence "Checks a remote machine to find a user that uploads files.", which has similar keywords but is obviously describing a completely different behavior.

    Read the article

  • How might one cope with the ambiguous value produced by GetDllDirectory?

    - by Integer Poet
    GetDllDirectory produces an ambiguous value. When the string this call produces is empty, it means one of the following:

    - nobody has called SetDllDirectory
    - somebody passed NULL to SetDllDirectory
    - somebody passed an empty string to SetDllDirectory

    The first two cases are equivalent for my purposes, but the third case is a problem. If I want to write save/restore code (call GetDllDirectory to save the "old" value, SetDllDirectory to set a "new" value temporarily, and later SetDllDirectory again to restore the "old" value), I run the risk of reversing some other programmer's intent. If the other programmer intended for the current working directory to be in the DLL search order (in other words, one of the first two bullets is true), and I pass an empty string to SetDllDirectory, I will be taking the current working directory out of the DLL search order, reversing the other programmer's intent.

    Can anyone suggest an approach to eliminate or work around this ambiguity?

    P.S. I know having the current working directory in the DLL search order could be interpreted as a security hole. Nevertheless, it is the default behavior, and my code is not in a position to undo that; my code needs to be compatible with the expectations of all potential callers, many of which are large and old and beyond my control.

    Read the article

  • Rails Model inheritance in forms

    - by Tiago
    I'm building a reporting system for my app. I created a model ReportKind, for example, but as I can report a lot of different things, I wanted to make different groups of report kinds. Since they share a lot of behavior, I'm trying to use inheritance. So I have the main model:

        class ReportKind < ActiveRecord::Base
        end

    and created, for example:

        class UserReportKind < ReportKind
        end

    In my report_kinds table I have the type column, and up to here it is all working.

    My problem is in the forms/controllers. When I do a ReportKind.new, my form is built with the 'report_kind' prefix. If I get a UserReportKind, even through a ReportKind.find, the form will build the 'user_report_kind' prefix. This messes everything up in the controllers, since sometimes I'll have params[:report_kind], sometimes params[:user_report_kind], and so on for every other subclass I make. Is there any way to force it to always use the 'report_kind' prefix?

    Also, I had to force the attribute 'type' in the controller, because it didn't get the value directly from the form -- is there a prettier way to do this?

    Routing was another problem, since it was trying to build routes based on the inherited models' names. I got around that by adding the other models to the routes, pointing to the same controller.

    Read the article

  • How to check whether a user is logged in to a web application?

    - by Morgan Cheng
    I want to learn the full details of web application authentication, so I decided to write a CodeIgniter authentication library from scratch. Now I have to make a design decision about how to determine whether a user is logged in.

    Basically, after the user inputs a username & password pair, a cookie is set for the session, and subsequent navigation in the web application will not require the username & password. The server side checks whether the session cookie is valid to determine whether the current user is logged in. The question is: how to determine whether the cookie is a valid cookie issued by the server side?

    I can imagine the simplest way is to store the cookie value in the session state as well. For each HTTP request, compare the value from the cookie with the value from the server session. (Since CodeIgniter's session library stores session variables in cookies, that is not applicable without some tweaking.) This method requires storage on the server side.

    For a huge web application deployed across multiple datacenters, it is possible that the user inputs the username & password while browsing in one datacenter, but accesses the web application from another datacenter later. The expected behavior is that the user only inputs the username & password once. As a result, all datacenters would need access to the session state, which may not be workable even if the session state is stored in external storage such as a database.

    I tried Google. I logged in to Google through an Asian proxy, which is supposed to direct me to datacenters in Asia. Then I switched to a North American proxy, which should direct me to datacenters in North America. It recognized my login without asking for the username and password again.

    So, is there any way to determine whether a user is logged in without server-side session state?
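
    One stateless approach (a C# sketch purely to illustrate the idea; the original is CodeIgniter/PHP) is to make the cookie self-validating: sign its payload with a secret key using an HMAC, so any server in any datacenter can verify it without a shared session store:

        // Sketch only: key management, expiry handling and user lookup are simplified.
        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class LoginCookie
        {
            static readonly byte[] Key = Encoding.UTF8.GetBytes("replace-with-a-real-secret");

            public static string Issue(string userId, DateTime expiresUtc)
            {
                string payload = userId + "|" + expiresUtc.Ticks;
                return payload + "|" + Sign(payload);
            }

            public static bool IsValid(string cookie, out string userId)
            {
                userId = null;
                var parts = cookie.Split('|');
                if (parts.Length != 3) return false;
                string payload = parts[0] + "|" + parts[1];
                if (Sign(payload) != parts[2]) return false;                              // forged or tampered
                if (new DateTime(long.Parse(parts[1])) < DateTime.UtcNow) return false;   // expired
                userId = parts[0];
                return true;
            }

            static string Sign(string payload)
            {
                using (var hmac = new HMACSHA256(Key))
                    return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
            }
        }

    Every datacenter only needs the shared secret, not the session store; forced logout before expiry and key rotation are the usual complications with this scheme.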

    Read the article

  • How to force inclusion of an object file in a static library when linking into executable?

    - by Brian Bassett
    I have a C++ project that, due to its directory structure, is set up as a static library A, which is linked into shared library B, which is linked into executable C. (This is a cross-platform project using CMake, so on Windows we get A.lib, B.dll, and C.exe, and on Linux we get libA.a, libB.so, and C.)

    Library A has an init function (A_init, defined in A/initA.cpp) that is called from library B's init function (B_init, defined in B/initB.cpp), which is called from C's main. Thus, when linking B, A_init (and all symbols defined in initA.cpp) is linked into B (which is our desired behavior).

    The problem is that library A also defines a function (Af, defined in A/Afort.f) that is intended to be dynamically loaded (i.e. LoadLibrary/GetProcAddress on Windows and dlopen/dlsym on Linux). Since there are no references to Af from library B, symbols from A/Afort.o are not included in B. On Windows, we can artificially create a reference by using the pragma:

        #pragma comment (linker, "/export:_Af")

    Since this is a pragma, it only works on Windows (using Visual Studio 2008). To get it working on Linux, we've tried adding the following to A/initA.cpp:

        extern void Af(void);
        static void (*Af_fp)(void) = &Af;

    This does not cause the symbol Af to be included in the final link of B. How can we force the symbol Af to be linked into B?

    Read the article

  • Struts 2 discard cache header

    - by Dewfy
    I am seeing strange header-discarding behavior in Struts 2 while setting cache options for my image. I'm trying to have an image from the db cached on the client side.

    To render the image I use the approach from http://struts.apache.org/2.x/docs/how-can-we-display-dynamic-or-static-images-that-can-be-provided-as-an-array-of-bytes.html , where a special result type renders as follows:

        public void execute(ActionInvocation invocation) throws Exception {
            // ...some preparation
            HttpServletResponse response = ServletActionContext.getResponse();
            HttpServletRequest request = ServletActionContext.getRequest();
            ServletOutputStream os = response.getOutputStream();
            try {
                byte[] imageBytes = action.getImage();
                response.setContentType("image/gif");
                response.setContentLength(imageBytes.length);

                // I want caching for up to 10 min
                Date future = new Date((new Date()).getTime() + 1000 * 10 * 60L);
                response.addDateHeader("Expires", future.getTime());
                response.setHeader("Cache-Control", "max-age=" + 10 * 60 + "");
                response.addHeader("Cache-Control", "public");
                response.setHeader("ETag", request.getRequestURI());

                os.write(imageBytes);
            } catch (Exception e) {
                response.sendError(HttpServletResponse.SC_NOT_FOUND);
            }
            os.flush();
            os.close();
        }

    But when the image is embedded in a page it is always reloaded (Firebug shows code 200), and neither Expires nor max-age is present in the headers:

        Host             localhost:9090
        Accept           image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language  en-us,en;q=0.5
        Accept-Encoding  gzip,deflate
        Accept-Charset   ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive       300
        Connection       keep-alive
        Referer          http://localhost:9090/web/result?matchId=1
        Cookie           JSESSIONID=4156BEED69CAB0B84D950932AB9EA1AC;
        If-None-Match    /web/_srv/teamcolor
        Cache-Control    max-age=0

    I have no idea why they disappear. Maybe the problem is in the URL? It is formed with a parameter: http://localhost:9090/web/_srv/teamcolor?loginId=3

    Read the article

  • Why does TrimStart trim one char more when asked to trim "PRN.NUL"?

    - by James
    Here is the code:

        namespace TrimTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string ToTrim = "PRN.NUL";
                    Console.WriteLine(ToTrim);
                    string Trimmed = ToTrim.TrimStart("PRN.".ToCharArray());
                    Console.WriteLine(Trimmed);

                    ToTrim = "PRN.AUX";
                    Console.WriteLine(ToTrim);
                    Trimmed = ToTrim.TrimStart("PRN.".ToCharArray());
                    Console.WriteLine(Trimmed);

                    ToTrim = "AUX.NUL";
                    Console.WriteLine(ToTrim);
                    Trimmed = ToTrim.TrimStart("AUX.".ToCharArray());
                    Console.WriteLine(Trimmed);
                }
            }
        }

    The output is like this:

        PRN.NUL
        UL
        PRN.AUX
        AUX
        AUX.NUL
        NUL

    As you can see, TrimStart took the N out of NUL, but it doesn't do that for the other strings even though they also start with PRN. I tried with .NET Framework 3.5 and 4.0 and the results are the same. Is there any explanation for what causes this behavior?
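
    This is the documented behavior of TrimStart rather than a bug: the char array is treated as a set of characters, not as a prefix string, so trimming continues as long as the leading character is any of 'P', 'R', 'N' or '.'. For "PRN.NUL" the 'N' of "NUL" is still in that set, so it is trimmed too. A hedged sketch of a prefix-only alternative (the helper name is made up):

        // Removes an exact leading prefix instead of trimming a character set.
        static string TrimPrefix(string value, string prefix)
        {
            return value.StartsWith(prefix, StringComparison.Ordinal)
                ? value.Substring(prefix.Length)
                : value;
        }

        // TrimPrefix("PRN.NUL", "PRN.")                       -> "NUL"
        // "PRN.NUL".TrimStart("PRN.".ToCharArray())           -> "UL" (set-based trimming)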

    Read the article

  • Quality of TFS 2008 merged code

    - by paologios
    Does the quality of code merged by TFS 2008 depend on the programming language used? I know merging in Java / Subversion, and merging a branch into its trunk usually does not create many conflicts. Now, in my company, we use VB.NET. When I merge two files, TFS does not always get the code blocks right, e.g. it does not find the right If...Then / End If lines.

    To give you an example: File 2 is created as a branch of File 1. Both files were changed later; now I am merging those files and am receiving conflicts. The marked End If lines (1) are detected as corresponding, meaning the added event handler Button1_Click is being deleted.

    Now I wonder whether this behavior varies by language (C# vs. VB.NET), or are other source control solutions just better than TFS? (And I really liked TFS up to now :) )

    File 1:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Not Page.IsPostBack Then
                Label1.Text = "Hello"
                Label2.Text = "World"
            End If
        End Sub

        Protected Sub Button2_Click(ByVal sender, ByVal e As System.EventArgs) Handles Button2.Click
            ' ....
            If Page.IsValid Then
                Label3.Text = "Hello Button 2"
            End If
            ' ....
        End Sub

    File 2 (branch of File 1):

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Not Page.IsPostBack Then
                fillTableFromDatabase()
            End If ' (1)
        End Sub

        Protected Sub Button1_Click(ByVal sender, ByVal e As System.EventArgs) Handles Button1.Click
            ' do something here
        End Sub

        Protected Sub Button2_Click(ByVal sender, ByVal e As System.EventArgs) Handles Button2.Click
            ' ....
            If Page.IsValid Then
            End If ' (1)
            ' ....
        End Sub

    Read the article

  • Are there equivalents to Ruby's method_missing in other languages?

    - by Justin Ethier
    In Ruby, objects have a handy method called method_missing which allows one to handle method calls for methods that have not even been (explicitly) defined:

        Invoked by Ruby when obj is sent a message it cannot handle. symbol is the symbol for the
        method called, and args are any arguments that were passed to it. By default, the interpreter
        raises an error when this method is called. However, it is possible to override the method to
        provide more dynamic behavior. The example below creates a class Roman, which responds to
        methods with names consisting of roman numerals, returning the corresponding integer values.

        class Roman
          def romanToInt(str)
            # ...
          end
          def method_missing(methId)
            str = methId.id2name
            romanToInt(str)
          end
        end

        r = Roman.new
        r.iv    #=> 4
        r.xxiii #=> 23
        r.mm    #=> 2000

    For example, Ruby on Rails uses this to allow calls to methods such as find_by_my_column_name.

    My question is: what other languages support an equivalent to method_missing, and how do you implement the equivalent in your code?
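
    As one data point (not an exhaustive answer), C# 4.0 offers a comparable hook through System.Dynamic.DynamicObject: a dynamic receiver can intercept calls to members that were never declared, much like method_missing. A sketch:

        using System;
        using System.Dynamic;

        // Responds to any method name by reporting which "missing" method was called.
        class Roman : DynamicObject
        {
            public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
            {
                // binder.Name plays the role of Ruby's methId; a real version would parse roman numerals here.
                result = "you called " + binder.Name;
                return true;   // true = handled, so no RuntimeBinderException is thrown
            }
        }

        class Demo
        {
            static void Main()
            {
                dynamic r = new Roman();
                Console.WriteLine(r.iv());     // "you called iv"
                Console.WriteLine(r.xxiii());  // "you called xxiii"
            }
        }

    Python's __getattr__ and Smalltalk's doesNotUnderstand: are in the same family, although the mechanics differ from language to language.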

    Read the article

  • libcurl (c api) READFUNCTION for http PUT blocking forever

    - by Duane
    I am using libcurl for a RESTful library, and I am having two problems with a PUT message; I am just trying to send a small piece of content like "hello" via PUT. I am using libcurl 7.20.1.

    1. My READFUNCTION for PUTs blocks for a very large amount of time (minutes) when I follow the manual at curl.haxx.se and return 0 to indicate I have finished the content (on OS X). When I return something greater than 0, this succeeds much faster (< 1 sec).

    2. When I run this on my Linux machine (Ubuntu 10.04), the blocking call appears to NEVER return when I return 0. If I change the behavior to return the size written, libcurl appends all the data in the HTTP body, sending way more data, and it fails with a "too much data" message from the server.

    My read function is below; any help would be greatly appreciated.

        typedef struct {
            void *data;
            int body_size;
            int bytes_remaining;
            int bytes_written;
        } postdata;

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            if (stream) {
                postdata *ud = (postdata *)stream;
                if (ud->bytes_remaining) {
                    if (ud->body_size > size*nmemb) {
                        memcpy(ptr, ud->data + ud->bytes_written, size*nmemb);
                        ud->bytes_written += size+nmemb;
                        ud->bytes_remaining = ud->body_size - size*nmemb;
                        return size*nmemb;
                    } else {
                        memcpy(ptr, ud->data + ud->bytes_written, ud->bytes_remaining);
                        ud->bytes_remaining = 0;
                        return 0;
                    }
                }
            }

    Read the article

  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community,

    We have a Wicket-based Java application deployed in a production server cluster using Apache (2.2.3) with mod_jk (1.2.30) as the load balancing component with sticky sessions, and JBoss 5 as the application container for the Java application.

    We are inconsistently seeing an issue in our production environment where our AJP queues between Apache and JBoss, as shown in the JMX console, fill up with requests to the point where the application server is no longer taking on any new requests. When looking at all involved system components (overall traffic, db load, db process list, load of all clustered application server nodes), nothing points towards a capacity issue that would explain why the calls are being stalled in the AJP queue. Instead, all systems appear sufficiently idle.

    So far, our only remedy is to restart the appservers and the load balancer, which only occasionally clears the AJP queues. We are trying to figure out why the queues fill up to the point that no calls get returned to the end user, even though the system is not under high load.

    Has anyone else experienced similar problems? Are there any other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk for mod_cluster to resolve the issue?

    Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting, I would be more than willing to do so.

    /Ben

    Read the article

  • named type not used for constructor injection

    - by nmarun
    Hi, I have a simple console application where I have the following setup:

        public interface ILogger
        {
            void Log(string message);
        }

        class NullLogger : ILogger
        {
            private readonly string version;

            public NullLogger()
            {
                version = "1.0";
            }

            public NullLogger(string v)
            {
                version = v;
            }

            public void Log(string message)
            {
                Console.WriteLine("NULL " + version + " : " + message);
            }
        }

    The configuration details are below:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" />

    My calling code looks like this:

        IUnityContainer container = new UnityContainer();
        UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
        section.Containers.Default.Configure(container);
        ILogger nullLogger = container.Resolve<ILogger>();
        nullLogger.Log("hello");

    This works fine, but once I give a name to this type, something like:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" name="NullLogger" />

    the above calling code no longer works, even if I explicitly register the type using:

        container.RegisterType<ILogger, NullLogger>();

    I get the error:

        Resolution of the dependency failed, type = "UnityConsole.ILogger", name = "".
        Exception message is: The current build operation (build key Build Key[UnityConsole.NullLogger, null]) failed:
        The parameter v could not be resolved when attempting to call constructor UnityConsole.NullLogger(System.String v).
        (Strategy type BuildPlanStrategy, index 3)

    Why doesn't Unity look into named instances? To get it to work, I have to do:

        ILogger nullLogger = container.Resolve<ILogger>("NullLogger");

    Where is this behavior documented?

    Arun
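
    A couple of hedged observations (not from the original post): Unity resolves the default (unnamed) mapping when Resolve<T>() is called without a name, so once the only mapping in the configuration is the named one, Resolve<ILogger>() has nothing to fall back to and tries to build NullLogger through its greediest constructor, which needs the string v. Two ways around this, sketched in C#:

        // Option 1: resolve by the registered name.
        ILogger byName = container.Resolve<ILogger>("NullLogger");

        // Option 2: also register a default mapping and pin the parameterless constructor,
        // so Resolve<ILogger>() no longer tries NullLogger(string v).
        container.RegisterType<ILogger, NullLogger>(new InjectionConstructor());
        ILogger byDefault = container.Resolve<ILogger>();

    InjectionConstructor comes from Unity's registration API; exact namespaces vary between Unity versions, so treat this as a direction rather than drop-in code.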

    Read the article
