Search Results

Search found 5267 results on 211 pages for 'use cases'.

Page 173/211 | < Previous Page | 169 170 171 172 173 174 175 176 177 178 179 180  | Next Page >

  • CSS/IE7: The Case of the Extending Background-Image

    - by dmr
    The situation: there is a collapsible advanced search box. It is made up of a search box div that contains a boxhead div and a boxbody div. Inside the boxbody div, there is a searchToggle div. When the user clicks "Show/Hide", the display style property of the searchToggle div is toggled between block and none. (The search fields are hidden and the search boxbody gets much smaller.) The 2 background-images for the body of the search box are set via the css of the searchBox div and the boxbody div. In IE7, when the searchToggle div is hidden, the background-image from the searchBox div extends on the left more than it should (see Here). It shows up correctly when the display of the searchToggle div is block (see Here). Everything shows up correctly, in both cases, in IE8 and FF. The relevant HTML: <div class="searchBox"> <div class="boxhead"> <h2></h2> </div> <div class="boxbody"> <div id="searchToggle" name="searchToggle"> </div> </div> </div> The relevant CSS: .searchBox { margin: 0 auto; width: 700px; background: url(/images/myImageRight-r.gif) no-repeat bottom right; font-size: 100%; text-align: left; overflow: hidden; } .boxbody { margin: 0; padding: 5px 30px 31px; background-image: url(/images/myImageLeft.gif); background-repeat: no-repeat; background-position: left bottom; }

    Read the article

  • Android while getting HTTP response to file how to know it wasn't fully loaded?

    - by Stan
    I'm using this approach to store a big-sized response from the server to parse it later: final HttpClient client = new DefaultHttpClient(new BasicHttpParams()); final HttpGet mHttpGetRequest = new HttpGet(strUrl); mHttpGetRequest.setHeader("Content-Type", "application/x-www-form-urlencoded"); FileOutputStream fos = null; try { final HttpResponse response = client.execute(mHttpGetRequest); final StatusLine statusLine = response.getStatusLine(); lastHttpErrorCode = statusLine.getStatusCode(); lastHttpErrorMsg = statusLine.getReasonPhrase(); if (lastHttpErrorCode == 200) { HttpEntity entity = response.getEntity(); fos = new FileOutputStream(reponseFile); entity.writeTo(fos); entity.consumeContent(); fos.flush(); } } catch (ClientProtocolException e) { e.printStackTrace(); lastHttpErrorMsg = e.toString(); return null; } catch (final ParseException e) { e.printStackTrace(); lastHttpErrorMsg = e.toString(); return null; } catch (final UnknownHostException e) { e.printStackTrace(); lastHttpErrorMsg = e.toString(); return null; } catch (IOException e) { e.printStackTrace(); lastHttpErrorMsg = e.toString(); } finally{ if (fos!=null) try{ fos.close(); } catch (IOException e){} } Now, how can I ensure the response was completely received and thus saved to the file? Assume the client's device lost its Internet connection while this code was running, so the app received only part of the real response. I'm pretty sure this happens, because I get parsing exceptions like "tag not closed", "unexpected end of file" etc. So I need to detect this situation somehow to prevent the code from parsing a partial response, but I can't see how. Is it possible at all, and how do I do it? Or does it have to raise an IOException in such cases?
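
    A hedged aside (not taken from the question or from any answer to it): when the server sends a Content-Length header, comparing it against the number of bytes actually written to disk is one way to detect a truncated body before parsing. HttpEntity.getContentLength() returns -1 when the header is absent, in which case this check proves nothing. The class and method names below are made up for illustration; a minimal sketch:

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;

        final class ResponseSaver {
            // Writes the response body to a file and reports whether the bytes on
            // disk match the Content-Length the server declared. Returns true when
            // no Content-Length was sent (-1), since completeness cannot be checked.
            static boolean saveAndVerify(HttpResponse response, File responseFile) throws IOException {
                HttpEntity entity = response.getEntity();
                long expected = entity.getContentLength(); // -1 if unknown
                FileOutputStream out = new FileOutputStream(responseFile);
                try {
                    entity.writeTo(out);
                    out.flush();
                } finally {
                    out.close();
                }
                return expected < 0 || responseFile.length() == expected;
            }
        }

    If this returns false, the file can be deleted and the request retried instead of being handed to the parser.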

    Read the article

  • Authentication and Security in my website - need advice please.

    - by Ichirichi
    Hi, I am using a database with a list of username/passwords, and a simple web form that allows users to enter their username/password. When they submit the page, I simply do a stored procedure check to authenticate. If they are authorised, then their user details (e.g. username, dob, address, company address, other important info) are stored in a custom User object and then in a session. This custom User object that I created is used throughout the web application, and also in a sub-site (session sharing). My questions/problems are: Is my method of authentication the correct way to do things? I find users complaining that their session has expired although they "were not idle", possibly due to the app pool recycling. They type large amounts of text and find that their session has expired, and thus lose all the text they typed in. I am uncertain whether the session really does reset sporadically, but would Forms Authentication using cookies/cookieless resolve the issue? Alternatively, should I build and store the User object in a session, cookie or something else instead, in order to be more "correct" and avoid cases like the one in point #2? If I go down the Forms Authentication route, I believe I cannot store my custom User object in a Forms Authentication cookie, so does that mean I would store the UserID and then recreate the user object on every page? Would this not be a huge increase in server load? Advice and answers much appreciated. L
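
    A hedged sketch of the Forms Authentication route from the last point, not the only correct design: keep only the user name in the authentication cookie, then rebuild the custom User object on demand and cache it so the database is not hit on every page. The IUserRepository/LoadByUsername names are placeholders, and the 20-minute cache window is arbitrary:

        using System;
        using System.Web;
        using System.Web.Caching;
        using System.Web.Security;

        public static class CurrentUserProvider
        {
            // At login, after the stored-procedure check succeeds:
            public static void SignIn(string username)
            {
                FormsAuthentication.SetAuthCookie(username, false /* session cookie only */);
            }

            // On later requests, rebuild (or fetch from cache) the custom User object
            // so the database is not queried for every page.
            public static User GetCurrentUser(IUserRepository repository)
            {
                string username = HttpContext.Current.User.Identity.Name;
                string cacheKey = "user:" + username;
                var user = HttpContext.Current.Cache[cacheKey] as User;
                if (user == null)
                {
                    user = repository.LoadByUsername(username); // placeholder data-access call
                    HttpContext.Current.Cache.Insert(cacheKey, user, null,
                        DateTime.UtcNow.AddMinutes(20), Cache.NoSlidingExpiration);
                }
                return user;
            }
        }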

    Read the article

  • In Ruby, why is a method invocation not able to be treated as a unit when "do" and "end" is used?

    - by Jian Lin
    The following question is related to the question "Ruby Print Inject Do Syntax". My question is, can we insist on using do and end and make it work with puts or p? This works: a = [1,2,3,4] b = a.inject do |sum, x| sum + x end puts b # prints out 10 so, is it correct to say, inject is an instance method of the Array object, and this instance method takes a block of code, and then returns a number. If so, then it should be no different from calling a function or method and getting back a return value: b = foo(3) puts b or b = circle.getRadius() puts b In the above two cases, we can directly say puts foo(3) puts circle.getRadius() so, there is no way to make it work directly by using the following 2 ways: a = [1,2,3,4] puts a.inject do |sum, x| sum + x end but it gives ch01q2.rb:7:in `inject': no block given (LocalJumpError) from ch01q2.rb:4:in `each' from ch01q2.rb:4:in `inject' from ch01q2.rb:4 grouping the method call using ( ) doesn't work either: a = [1,2,3,4] puts (a.inject do |sum, x| sum + x end) and this gives: ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')' puts (a.inject do |sum, x| ^ ch01q3.rb:4: syntax error, unexpected '|', expecting '=' puts (a.inject do |sum, x| ^ ch01q3.rb:6: syntax error, unexpected kEND, expecting $end end) ^ finally, the following version works: a = [1,2,3,4] puts a.inject { |sum, x| sum + x } but why doesn't the grouping of the method invocation using ( ) work in the earlier example? What if a programmer insist that he uses do and end, can it be made to work?
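
    A hedged note rather than an answer: do ... end binds to the outermost call, so in puts a.inject do ... end the block is handed to puts rather than inject, which is exactly what the LocalJumpError is complaining about. Putting the whole call inside puts's own argument parentheses (no space between puts and the opening parenthesis, unlike the grouped version tried above) keeps the block attached to inject; whether this parses can depend on the Ruby version, so treat it as something to try:

        a = [1, 2, 3, 4]

        # The parentheses here are puts's argument list, so the do...end block
        # can only belong to inject. Note: no space between puts and "(".
        puts(a.inject do |sum, x|
          sum + x
        end)   # => 10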

    Read the article

  • C++ Conceptual problem with (Pointer) Pointers

    - by Ptr
    I have a structure usually containing a pointer to an int. However, in some special cases, it is necessary that this int pointer points to another pointer which then points to an int. Wow: I mentioned the word pointer 5 times so far! Is this even possible? I thought about it that way: Instead of using a second int pointer, which is most likely not possible as my main int pointer can only point to an int and not to another int pointer, I could make it a reference like this: int intA = 1; int intB = 2; int& intC = intB; int* myPointers[ 123 ]; myPointers[ 0 ] = &intA; myPointers[ 1 ] = &intB; myPointers[ 3 ] = &intC; So the above would do what I want: The reference to intB (intC) behaves quite like I want it to (If it gets changed it also changes intB) The problem: I can't change references once they are set, right? Or is there a way? Everything in short: How do I get a value to work with * (pointers) and ** (pointers to pointers)?
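
    For what it's worth, the "pointer that points to another pointer" in the first paragraph is just an int**, and unlike a reference the intermediate pointer can be re-seated later. A minimal sketch with made-up variable names:

        #include <iostream>

        int main() {
            int intA = 1;
            int intB = 2;

            int*  direct   = &intA;   // ordinary pointer to an int
            int*  inner    = &intB;   // the pointer the double pointer goes through
            int** indirect = &inner;  // pointer to a pointer to an int

            std::cout << *direct << ' ' << **indirect << '\n';  // prints: 1 2

            inner = &intA;            // re-seat the middle pointer (a reference could not do this)
            std::cout << **indirect << '\n';                    // prints: 1

            return 0;
        }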

    Read the article

  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (hamming weight) function intensively in a Windows C application and have to optimize it as much as possible in order to boost performance. In more than half the cases where I use the function, I only need to know the value up to a maximum of 15. The software will run on a wide range of processors, both old and new. I already make use of the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but would like to optimize the software implementation (used as a fallback if no SSE4 is present) as much as possible. Currently I have the following software implementation of the function: inline int population_count64(unsigned __int64 w) { w -= (w >> 1) & 0x5555555555555555ULL; w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL); w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL; return int((w * 0x0101010101010101ULL) >> 56); } So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
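
    On point (1), one possibility (a sketch, not a benchmarked answer): when only values up to 15 matter, an early-exit loop that clears one set bit per iteration (Kernighan's trick) never runs more than 16 times, and words with few set bits finish in a handful of iterations. Whether it actually beats the SWAR routine above depends on the data and the CPU and would need measuring:

        /* Exact count for results <= 15; returns 16 to mean "more than 15".
           w &= w - 1 clears the lowest set bit, so the loop body runs once per
           set bit, capped at 16 iterations. */
        static inline int population_count64_capped(unsigned __int64 w)
        {
            int count = 0;
            while (w != 0 && count < 16) {
                w &= w - 1;
                ++count;
            }
            return count;
        }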

    Read the article

  • Entities used to serialize data have changed. How can the serialized data be upgraded for the new entities?

    - by i8abug
    Hi, I have a bunch of simple entity instances that I have serialized to a file. In the future, I know that the structure of these entities will change (i.e., maybe I will rename Name to Header or something). The thing is, I don't want to lose the data that I have saved in all these old files. What is the proper way to either load the data from the old entities into new entities, or upgrade the old files so that they can be used with new entities? Note: I think I am stuck with binary serialization, not XML serialization. Thanks in advance! Edit: So I have an answer for the case I have described. I can use a DataContractSerializer and do something like [DataMember(Name = "bar")] private string foo; and change the name in the code while keeping the same name that was used for serialization. But what about the following additional cases: (a) the original entity has new members which can be serialized; (b) some serialized members that were in the original entity are removed; (c) some members have actually changed in function (suppose that the original class had a FirstName and LastName member and it has been refactored to have only a FullName member which combines the two). To handle these, I need some sort of interpreter/translator deserialization class, but I have no idea what I should use.
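
    For the binary-serialization cases listed at the end, .NET's version-tolerant serialization helps with the first two (an [OptionalField] may be absent from old streams, and data for removed members is generally just dropped), and an [OnDeserialized] callback can patch up members whose meaning changed. A sketch against the FirstName/LastName-to-FullName example, assuming BinaryFormatter; the class and field names are illustrative:

        using System;
        using System.Runtime.Serialization;

        [Serializable]
        public class Person
        {
            // Fields that existed in version 1 keep their names so old streams still bind.
            private string firstName;
            private string lastName;

            // New in version 2: optional, so old streams that lack it still deserialize.
            [OptionalField(VersionAdded = 2)]
            private string fullName;

            [OnDeserialized]
            private void OnDeserialized(StreamingContext context)
            {
                // Old streams carry firstName/lastName but no fullName: rebuild it here.
                if (fullName == null)
                    fullName = (firstName + " " + lastName).Trim();
            }
        }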

    Read the article

  • Iterative Reduction to Null Matrix

    - by user1459032
    Here's the problem: I'm given a matrix like Input: 1 1 1 1 1 1 1 1 1 At each step, I need to find a "second" matrix of 1's and 0's with no two 1's on the same row or column. Then, I'll subtract the second matrix from the original matrix. I will repeat the process until I get a matrix with all 0's. Furthermore, I need to take the least possible number of steps. I need to print all the "second" matrices in O(n) time. In the above example I can get to the null matrix in 3 steps by subtracting these three matrices in order: Expected output: 1 0 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 0 1 0 0 0 1 1 0 0 I have coded an attempt, in which I am finding the first maximum value and creating the second matrices based on the index of that value. But for the above input I am getting 4 output matrices, which is wrong: My output: 1 0 0 0 1 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 My solution works for most of the test cases but fails for the one given above. Can someone give me some pointers on how to proceed, or find an algorithm that guarantees optimality? Test case that works: Input: 0 2 1 0 0 0 3 0 0 Output 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0

    Read the article

  • When to use basic types (Integer, String), and when to write a new class?

    - by belgarat
    Stackoverflow users: A lot of things can be represented in programs by using the basic types, or we can create a new class for them. Example: A social security number can be a number, a string or its own object. (Other common examples: Phone numbers, names, zip codes, user id, order id and other id's.) My question is: When should the basic types be used, and when should we write ourselves a new class? I see that when you need to add behavior, you'll want to create a class (example, social security number parsing, validation, formatting, etc). But is this the only criterion? I have come across cases where many of these things are represented as Java Integers and/or Strings. We lose the benefit of type-checking, and I have often seen bugs caused by parameters being mixed in calls to function(Integer, Integer, Integer, Integer). On the other hand, some programmers are opposed to over-designing by creating classes for "everything". Obviously, the answer is "it depends". But, what do you think, and what do you normally do?
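
    As a concrete illustration of the trade-off being described (a sketch only, not an argument for either side): a small value class gives the compiler something to check at call sites and gives validation and formatting a single home, at the cost of one more class:

        // A call such as register(SocialSecurityNumber, ZipCode) can no longer have
        // its arguments silently swapped, unlike register(String, String).
        public final class SocialSecurityNumber {
            private final String digits;

            public SocialSecurityNumber(String raw) {
                String cleaned = raw.replaceAll("[^0-9]", "");
                if (cleaned.length() != 9) {
                    throw new IllegalArgumentException("SSN must have 9 digits: " + raw);
                }
                this.digits = cleaned;
            }

            public String formatted() {   // e.g. 123-45-6789
                return digits.substring(0, 3) + "-" + digits.substring(3, 5) + "-" + digits.substring(5);
            }

            @Override public boolean equals(Object o) {
                return o instanceof SocialSecurityNumber && digits.equals(((SocialSecurityNumber) o).digits);
            }

            @Override public int hashCode() { return digits.hashCode(); }
            @Override public String toString() { return formatted(); }
        }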

    Read the article

  • Can you explain this generics behavior and if I have a workaround?

    - by insta
    Sample program below: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace GenericsTest { class Program { static void Main(string[] args) { IRetrievable<int, User> repo = new FakeRepository(); Console.WriteLine(repo.Retrieve(35)); } } class User { public int Id { get; set; } public string Name { get; set; } } class FakeRepository : BaseRepository<User>, ICreatable<User>, IDeletable<User>, IRetrievable<int, User> { // why do I have to implement this here, instead of letting the // TKey generics implementation in the baseclass handle it? //public User Retrieve(int input) //{ // throw new NotImplementedException(); //} } class BaseRepository<TPoco> where TPoco : class,new() { public virtual TPoco Create() { return new TPoco(); } public virtual bool Delete(TPoco item) { return true; } public virtual TPoco Retrieve<TKey>(TKey input) { return null; } } interface ICreatable<TPoco> { TPoco Create(); } interface IDeletable<TPoco> { bool Delete(TPoco item); } interface IRetrievable<TKey, TPoco> { TPoco Retrieve(TKey input); } } This sample program represents the interfaces my actual program uses, and demonstrates the problem I'm having (commented out in FakeRepository). I would like for this method call to be generically handled by the base class (which in my real example is able to handle 95% of the cases given to it), allowing for overrides in the child classes by specifying the type of TKey explicitly. It doesn't seem to matter what parameter constraints I use for the IRetrievable, I can never get the method call to fall through to the base class. Also, if anyone can see an alternate way to implement this kind of behavior and get the result I'm ultimately looking for, I would be very interested to see it. Thoughts?
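
    A hedged observation about the commented-out Retrieve: an interface member such as TPoco Retrieve(TKey input) can only be satisfied by a method with exactly that signature, and the base class's generic method Retrieve<TKey>(TKey input) is a different signature, so the compiler insists on an implementation in FakeRepository. One workaround against the sample program (not necessarily the right design for the real repositories) is to lift TKey to a class-level type parameter so the base method matches the interface exactly:

        // Sketch: BaseRepository now names the key type, so its Retrieve matches
        // IRetrievable<TKey, TPoco>.Retrieve and no override is needed in FakeRepository.
        class BaseRepository<TKey, TPoco> : IRetrievable<TKey, TPoco>
            where TPoco : class, new()
        {
            public virtual TPoco Create() { return new TPoco(); }
            public virtual bool Delete(TPoco item) { return true; }
            public virtual TPoco Retrieve(TKey input) { return null; }
        }

        class FakeRepository : BaseRepository<int, User>,
            ICreatable<User>, IDeletable<User>, IRetrievable<int, User>
        {
            // Nothing to add: the inherited Retrieve(int) satisfies IRetrievable<int, User>.
        }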

    Read the article

  • Can't declare an abstract method private....

    - by Zombies
    I want to do this, yet I can't. Here is my scenario and rational. I have an abstract class for test cases that has an abstract method called test(). The test() method is to be defined by the subclass; it is to be implemented with logic for a certain application, such as CRMAppTestCase extends CompanyTestCase. I don't want the test() method to be invoked directly, I want the super class to call the test() method while the sub class can call a method which calls this (and does other work too, such as setting a current date-time right before the test is executed for example). Example code: public abstract class CompanyTestCase { //I wish this would compile, but it cannot be declared private private abstract void test(); public TestCaseResult performTest() { //do some work which must be done and should be invoked whenever //this method is called (it would be improper to expect the caller // to perform initialization) TestCaseResult result = new TestCaseResult(); result.setBeginTime(new Date()); long time = System.currentTimeMillis(); test(); //invoke test logic result.setDuration(System.currentTimeMillis() - time); return result; } } Then to extend this.... public class CRMAppTestCase extends CompanyTestCase { public void test() { //test logic here } } Then to call it.... TestCaseResult result = new CRMAppTestCase().performTest();
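
    A hedged sketch of the usual compromise: Java rejects private abstract because a private member would be invisible to the very subclass that has to implement it, but protected abstract plus a final template method comes close to the intent. Subclasses must implement test(), code outside the package cannot call it directly, and performTest() cannot be bypassed by overriding. TestCaseResult is assumed to exist as in the question:

        import java.util.Date;

        public abstract class CompanyTestCase {

            // private abstract is illegal; protected keeps test() out of the public API
            // while still forcing subclasses to implement it.
            protected abstract void test();

            // final so subclasses cannot skip the setup and timing logic.
            public final TestCaseResult performTest() {
                TestCaseResult result = new TestCaseResult();
                result.setBeginTime(new Date());
                long time = System.currentTimeMillis();
                test();
                result.setDuration(System.currentTimeMillis() - time);
                return result;
            }
        }

        class CRMAppTestCase extends CompanyTestCase {
            @Override
            protected void test() {
                // test logic here
            }
        }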

    Read the article

  • No class def found error for JUnit Test on android

    - by J Bellamy
    I am having some very bizarre behaviour. I have a large number of test cases for my Android application, and they all work except for one. When I run this one I get a java.lang.NoClassDefFoundError: org.JUnit.test Yes, I have the JUnit 4 library imported into the project, and my other JUnit tests are running without any problems. What is particularly bizarre is that before I hit this problem I had an error in my code- basically, I tried writing a file to a read only folder. When that occurred, the JUnitTest would execute up to the point where it would hit an IO exception for accessing a part of memory it cannot access. I fix this problem, and suddenly the Android emulator doesn't seem to know what org.JUnit.test is. I have examined the run configuration for this test class, and it is the same as my others. It is in the same folder as the other tests as well. It also uses the same import statements. Any idea on what is going on? I am using the Android 10 emulator, and eclipse version 3.7.2. Edit: To clarify, the error I get appears on Logcat and not in my Eclipse project.

    Read the article

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a j2ee application and the stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below. I have a j2ee application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server. The j2ee application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects; a person object and some with multiple instances for an average size in the range of 3000 to 20,000 bytes. Some special cases could be many times this amount. From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?

    Read the article

  • How do I daemonize an arbitrary script in unix?

    - by dreeves
    I'd like a daemonizer that can turn an arbitrary, generic script or command into a daemon. There are two common cases I'd like to deal with: I have a script that should run forever. If it ever dies (or on reboot), restart it. Don't let there ever be two copies running at once (detect if a copy is already running and don't launch it in that case). I have a simple script or command line command that I'd like to keep executing repeatedly forever (with a short pause between runs). Again, don't allow two copies of the script to ever be running at once. Of course it's trivial to write a "while(true)" loop around the script in case 2 and then apply a solution for case 1, but a more general solution will just solve case 2 directly since that applies to the script in case 1 as well (you may just want a shorter or no pause if the script is not intended to ever die (of course if the script really does never die then the pause doesn't actually matter)). Note that the solution should not involve, say, adding file-locking code or PID recording to the existing scripts. More specifically, I'd like a program "daemonize" that I can run like % daemonize myscript arg1 arg2 or, for example, % daemonize 'echo `date` >> /tmp/times.txt' which would keep a growing list of dates appended to times.txt. (Note that if the argument(s) to daemonize is a script that runs forever as in case 1 above, then daemonize will still do the right thing, restarting it when necessary.) I could then put a command like above in my .login and/or cron it hourly or minutely (depending on how worried I was about it dying unexpectedly). NB: The daemonize script will need to remember the command string it is daemonizing so that if the same command string is daemonized again it does not launch a second copy. Also, the solution should ideally work on both OS X and linux but solutions for one or the other are welcome. (If I'm thinking of this all wrong or there are quick-and-dirty partial solutions, I'd love to hear that too.)
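
    Not a full daemonizer, but a hedged sketch of the usual ingredients on Linux: flock(1) provides the "never two copies" guarantee without touching the wrapped script, and a wrapper loop provides the restart/repeat behaviour. The lock-file naming and sleep interval are arbitrary, and flock is not shipped with OS X by default:

        #!/bin/sh
        # Usage sketch: daemonize.sh 'echo `date` >> /tmp/times.txt'
        CMD="$1"
        LOCK="/tmp/daemonize-$(echo "$CMD" | cksum | cut -d' ' -f1).lock"

        (
            flock -n 9 || exit 1          # another copy of this command string is already supervised
            while true; do
                sh -c "$CMD"              # run the script or command
                sleep 5                   # short pause before running/restarting it
            done
        ) 9>"$LOCK"

    Run from cron or a login script, the flock guard makes repeated invocations harmless: only the first one keeps the loop.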

    Read the article

  • Should I learn two (or more) programming languages in parallel?

    - by c_maker
    I found entries on this site about learning a new programming language, however, I have not come across anything that talks about the advantages and disadvantages of learning two languages at the same time. Let's say my goal is to learn two new languages in a year. I understand that the definition of learning a new language is different for everyone and you can probably never know everything about a language. I believe in most cases the following things are enough to include the language in your resume and say that you are proficient in it (list is not in any particular order): Know its syntax so you can write a simple program in it Compare its underlying concepts with concepts of other languages Know best practices Know what libraries are available Know in what situations to use it Understand the flow of a more complex program At least know most of what you do not know I would probably look for a good book and pick an open source project for both of these languages to start with. My questions: Is it best to spend 5 months learning language#1 then 5 months learning language#2, or should you mix the two. Mixing them I mean you work on them in parallel. Should you pick two languages that are similar or different? Are there any advantages/disadvantages of let's say learning Lisp in tandem with Ruby? Is it a good idea to pick two languages with similar syntax or would it be too confusing? Please tell me what your experiences are regarding this. Does it make a difference if you are a beginner or a senior programmer?

    Read the article

  • ARC and __unsafe_unretained

    - by J Shapiro
    I think I have a pretty good understanding of ARC and the proper use cases for selecting an appropriate lifetime qualifiers (__strong, __weak, __unsafe_unretained, and __autoreleasing). However, in my testing, I've found one example that doesn't make sense to me. As I understand it, both __weak and __unsafe_unretained do not add a retain count. Therefore, if there are no other __strong pointers to the object, it is instantly deallocated. The only difference in this process is that __weak pointers are set to nil, and __unsafe_unretained pointers are left alone. If I create a __weak pointer to a simple, custom object (composed of one NSString property), I see the expected (null) value when trying to access a property: Test * __weak myTest = [[Test alloc] init]; myTest.myVal = @"Hi!"; NSLog(@"Value: %@", myTest.myVal); // Prints Value: (null) Similarly, I would expect the __unsafe_unretained lifetime qualifier to cause a crash, due to the resulting dangling pointer. However, it doesn't. In this next test, I see the actual value: Test * __unsafe_unretained myTest = [[Test alloc] init]; myTest.myVal = @"Hi!"; NSLog(@"Value: %@", myTest.myVal); // Prints Value: Hi! Why doesn't the __unsafe_unretained object become deallocated?

    Read the article

  • Is it good practice to blank out inherited functionality that will not be used?

    - by Timo Kosig
    I'm wondering if I should change the software architecture of one of my projects. I'm developing software for a project where two sides (in fact a host and a device) use shared code. That helps because shared data, e.g. enums can be stored in one central place. I'm working with what we call a "channel" to transfer data between device and host. Each channel has to be implemented on device and host side. We have different kinds of channels, ordinary ones and special channels which transfer measurement data. My current solution has the shared code in an abstract base class. From there on code is split between the two sides. As it has turned out there are a few cases when we would have shared code but we can't share it, we have to implement it on each side. The principle of DRY (don't repeat yourself) says that you shouldn't have code twice. My thought was now to concatenate the functionality of e.g. the abstract measurement channel on the device side and the host side in an abstract class with shared code. That means though that once we create an actual class for either the device or the host side for that channel we have to hide the functionality that is used by the other side. Is this an acceptable thing to do: public abstract class MeasurementChannelAbstract { protected void MethodUsedByDeviceSide() { } protected void MethodUsedByHostSide() { } } public class DeviceMeasurementChannel : MeasurementChannelAbstract { public new void MethodUsedByDeviceSide() { base.MethodUsedByDeviceSide(); } } Now, DeviceMeasurementChannel is only using the functionality for the device side from MeasurementChannelAbstract. By declaring all methods/members of MeasurementChannelAbstract protected you have to use the new keyword to enable that functionality to be accessed from the outside. Is that acceptable or are there any pitfalls, caveats, etc. that could arise later when using the code?

    Read the article

  • In C, when do structure names have to be included in structure initializations and definitions?

    - by Tyler
    I'm reading The C Programming Language by K&R and in the section on structures I came across these code snippets: struct maxpt = { 320, 200 }; and /* addpoints: add two points */ struct addpoint(struct point p1, struct point p2) { p1.x += p2.x; p1.y += p2.y; return p1; } In the first case, it looks like it's assigning the values 320 and 200 to the members of the variable maxpt. But I noticed the name of the struct type is missing (shouldn't it be "struct struct_name maxpt = {320, 200}"? In the second case, the function return type is just "struct" and not "struct name_of_struct". I don't get why they don't include the struct names - how does it know what particular type of structure it's dealing with? My confusion is compounded by the fact that in previous snippets they do include the structure name, such as in the return type for the following function, where it's "struct point" and not just "struct". Why do they include the name in some cases and not in others? /* makepoint: make a point from x and y components */ struct point makepoint(int x, int y) { struct point temp; temp.x = x; temp.y = y; return temp; }
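
    For comparison, the same declarations with the structure tag written out (in C a declaration needs either a tag or a member list after the struct keyword, which is why the tagless forms above look suspicious). The point struct here is the usual x/y pair; this is a sketch, not a quotation from the book:

        struct point {      /* "point" is the tag that names the structure type */
            int x;
            int y;
        };

        struct point maxpt = { 320, 200 };

        /* addpoint: add two points */
        struct point addpoint(struct point p1, struct point p2)
        {
            p1.x += p2.x;
            p1.y += p2.y;
            return p1;
        }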

    Read the article

  • Has inheritance become bad?

    - by mafutrct
    Personally, I think inheritance is a great tool that, when applied reasonably, can greatly simplify code. However, it seems to me that many modern tools dislike inheritance. Let's take a simple example: Serialize a class to XML. As soon as inheritance is involved, this can easily turn into a mess. Especially if you're trying to serialize a derived class using the base class serializer. Sure, we can work around that. Something like a KnownType attribute and stuff. Besides being an itch in your code that you have to remember to update every time you add a derived class, that fails, too, if you receive a class from outside your scope that was not known at compile time. (Okay, in some cases you can still work around that, for instance using the NetDataContract serializer in .NET. Surely a certain advancement.) In any case, the basic principle still exists: Serialization and inheritance don't mix well. Considering the huge list of programming strategies that became possible and even common in the past decade, I feel tempted to say that inheritance should be avoided in areas that relate to serialization (in particular remoting and databases). Does that make sense? Or am I messing things up? How do you handle inheritance and serialization?

    Read the article

  • WCF - Return object without serializing?

    - by Mayo
    One of my WCF functions returns an object that has a member variable of a type from another library that is beyond my control. I cannot decorate that library's classes. In fact, I cannot even use DataContractSurrogate because the library's classes have private member variables that are essential to operation (i.e. if I return the object without those private member variables, the public properties throw exceptions). If I say that interoperability for this particular method is not needed (at least until the owners of this library can revise to make their objects serializable), is it possible for me to use WCF to return this object such that it can at least be consumed by a .NET client? How do I go about doing that? Update: I am adding pseudo code below... // My code, I have control [DataContract] public class MyObject { private TheirObject theirObject; [DataMember] public int SomeNumber { get { return theirObject.SomeNumber; } // public property exposed private set { } } } // Their code, I have no control public class TheirObject { private TheirOtherObject theirOtherObject; public int SomeNumber { get { return theirOtherObject.SomeOtherProperty; } set { // ... } } } I've tried adding DataMember to my instance of their object, making it public, using a DataContractSurrogate, and even manually streaming the object. In all cases, I get some error that eventually leads back to their object not being explicitly serializable.
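
    One hedged alternative to a surrogate (a sketch that trades the live delegation for a snapshot): copy the values out of the third-party object when MyObject is built, so the data contract only ever carries plain members and TheirObject never needs to be serialized. DataContractSerializer does not run constructors during deserialization, so the private setter and the lack of a parameterless constructor are not a problem:

        using System.Runtime.Serialization;

        [DataContract]
        public class MyObject
        {
            // Plain copied value; nothing from the third-party library is serialized.
            [DataMember]
            public int SomeNumber { get; private set; }

            // Built on the server side from their object before returning it to the client.
            public MyObject(TheirObject theirObject)
            {
                SomeNumber = theirObject.SomeNumber;
            }
        }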

    Read the article

  • Does this code depend on string interning to work?

    - by Nick Gotch
    I'm creating a key for a dictionary which is a structure of two strings. When I test this method in a console app, it works, but I'm not sure if the only reason it works is because the strings are being interned and therefore have the same references. Foo foo1 = new Foo(); Foo foo2 = new Foo(); foo1.Key1 = "abc"; foo2.Key1 = "abc"; foo1.Key2 = "def"; foo2.Key2 = "def"; Dictionary<Foo, string> bar = new Dictionary<Foo, string>(); bar.Add(foo1, "found"); if(bar.ContainsKey(foo2)) System.Console.WriteLine("This works."); else System.Console.WriteLine("Does not work"); The struct is simply: public struct Foo { public string Key1; public string Key2; } Are there any cases which would cause this to fail or am I good to rely on this as a unique key?
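
    A hedged note: the lookup above does not rely on interning, because the default ValueType.Equals used for a struct key compares the Key1/Key2 fields with string equality, which is value-based (it does fall back to reflection for structs containing reference-type fields, though). Making the comparison explicit removes both the reflection cost and any doubt; a sketch:

        using System;

        public struct Foo : IEquatable<Foo>
        {
            public string Key1;
            public string Key2;

            public bool Equals(Foo other)
            {
                // Explicit value comparison: no reflection, no reliance on interning.
                return string.Equals(Key1, other.Key1) && string.Equals(Key2, other.Key2);
            }

            public override bool Equals(object obj)
            {
                return obj is Foo && Equals((Foo)obj);
            }

            public override int GetHashCode()
            {
                unchecked
                {
                    int h = Key1 == null ? 0 : Key1.GetHashCode();
                    return (h * 397) ^ (Key2 == null ? 0 : Key2.GetHashCode());
                }
            }
        }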

    Read the article

  • Looking for out-of-place directories in an SVN working copy?

    - by jthg
    An annoyance that I sometimes come across with SVN is the working copy getting corrupted by one of the .svn folders getting moved from its original location. It doesn't happen often if you're careful and use the proper tools for all moves and renames, but it still somehow happens from time to time. First, does anyone know if there's a good way to catch the problem before a commit is even done? Cruise control usually catches the problem, but there are plenty of cases it wouldn't catch. Second, is there a quick and easy way to check for an out-of-place .svn folder if I suspect that there is one? I can definitely do it manually by deducing which directory is out of place based on the compiler errors or by diffing the working copy with another clean checkout. But this seems like a problem that SVN could diagnose in a second by giving me a list of all directories whose parent directory in the working copy doesn't match its parent directory in the repository. Is there some way to have SVN give me a list like that? Thanks.

    Read the article

  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC based connection or a lightweight RMI server. Last night I started a load testing application, which ran 100 headless clients, bombarding the server with requests, each client waiting only 1 - 2 seconds between running simple use cases, consisting of selecting records along with associated detail records from a simple e-store database (Chinook). This morning when I looked at the telemetry results from the server profiling session I noticed something which to me seemed strange (and made me keep the setup running for the remainder of the day), and I don't really know what conclusions to draw from it. Here are the results (profiler screenshots linked in the original post: memory, GC activity, threads, CPU load). Interesting, right? So the question is, is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration) or is my framework design somehow causing this? Running the server against MySQL as opposed to an embedded H2 database does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.

    Read the article

  • Beginner having difficulty with SQL query

    - by Vulcanizer
    Hi, I've been studying SQL for 2 weeks now and I'm preparing for an SQL test. Anyway I'm trying to do this question: For the table: 1 create table data { 2 id int, 3 n1 int not null, 4 n2 int not null, 5 n3 int not null, 6 n4 int not null, 7 primary key (id) 8 } I need to return the relation with tuples (n1, n2, n3) where all the corresponding values for n4 are 0. The problem asks me to solve it WITHOUT using subqueries(nested selects/views) It also gives me an example table and the expected output from my query: 01 insert into data (id, n1, n2, n3, n4) 02 values (1, 2,4,7,0), 03 (2, 2,4,7,0), 04 (3, 3,6,9,8), 05 (4, 1,1,2,1), 06 (5, 1,1,2,0), 07 (6, 1,1,2,0), 08 (7, 5,3,8,0), 09 (8, 5,3,8,0), 10 (9, 5,3,8,0); expects (2,4,7) (5,3,8) and not (1,1,2) since that has a 1 in n4 in one of the cases. The best I could come up with was: 1 SELECT DISTINCT n1, n2, n3 2 FROM data a, data b 3 WHERE a.ID <> b.ID 4 AND a.n1 = b.n1 5 AND a.n2 = b.n2 6 AND a.n3 = b.n3 7 AND a.n4 = b.n4 8 AND a.n4 = 0 but I found out that also prints (1,1,2) since in the example (1,1,2,0) happens twice from IDs 5 and 6. Any suggestions would be really appreciated.
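
    A hedged nudge rather than a full answer: grouping removes the need for the self-join, and "every n4 in the group is 0" can be written as an aggregate condition in HAVING, which is not a subquery. A sketch using the table and column names from the question:

        -- Groups whose n4 values are all zero: the smallest and largest n4 are both 0.
        SELECT n1, n2, n3
        FROM data
        GROUP BY n1, n2, n3
        HAVING MIN(n4) = 0 AND MAX(n4) = 0;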

    Read the article

  • Refactoring a complicated if-condition

    - by kumar kasimala
    Hi all, can anyone suggest the best way to avoid most of these if conditions? I have the code below and I want to eliminate most of the case-by-case if conditions. How can I do it? Any solution would be a great help: if (adjustment.adjustmentAccount.isIncrease) { if (adjustment.increaseVATLine) { if (adjustment.vatItem.isSalesType) { entry2.setDebit(adjustment.total); entry2.setCredit(0d); } else { entry2.setCredit(adjustment.total); entry2.setDebit(0d); } } else { if (adjustment.vatItem.isSalesType) { entry2.setCredit(adjustment.total); entry2.setDebit(0d); } else { entry2.setDebit(adjustment.total); entry2.setCredit(0d); } } } else { if (adjustment.increaseVATLine) { if (adjustment.vatItem.isSalesType) { entry2.setCredit(adjustment.total); entry2.setDebit(0d); } else { entry2.setDebit(adjustment.total); entry2.setCredit(0d); } } else { if (adjustment.vatItem.isSalesType) { entry2.setDebit(adjustment.total); entry2.setCredit(0d); } else { entry2.setCredit(adjustment.total); entry2.setDebit(0d); } } }
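
    One observation that collapses the nesting (a sketch; worth checking against the real business rule before adopting it): in the eight branches above, setDebit receives the total exactly when an odd number of the three flags is true, so the whole block reduces to an XOR:

        // isIncrease, increaseVATLine and isSalesType each flip the debit/credit side,
        // so the amount is a debit when an odd number of them is true.
        boolean debit = adjustment.adjustmentAccount.isIncrease
                ^ adjustment.increaseVATLine
                ^ adjustment.vatItem.isSalesType;

        if (debit) {
            entry2.setDebit(adjustment.total);
            entry2.setCredit(0d);
        } else {
            entry2.setCredit(adjustment.total);
            entry2.setDebit(0d);
        }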

    Read the article
