Search Results

Search found 8185 results on 328 pages for 'technical tests'.

  • Request for advice about class design, inheritance/aggregation

    - by Lasse V. Karlsen
    I have started writing my own WebDAV server class in .NET, and the first class I'm starting with is a WebDAVListener class, modelled after how the HttpListener class works. Since I don't want to reimplement the core HTTP protocol handling, I will use HttpListener for all it's worth, and thus I have a question. What would be the suggested way to handle this:

    - Implement all the methods and properties found inside HttpListener, just changing the class types where it matters (i.e. the GetContext + EndGetContext methods would return a different class for WebDAV contexts), storing and using a HttpListener object internally?
    - Construct WebDAVListener by passing it a HttpListener object to use?
    - Create a wrapper for HttpListener with an interface, and construct WebDAVListener by passing it an object implementing this interface?

    If going the route of passing a HttpListener (disguised or otherwise) to the WebDAVListener, would you expose the underlying listener object through a property, or would you expect the program that used the class to keep a reference to the underlying HttpListener? Also, in this case, would you expose some of the methods of HttpListener through the WebDAVListener, like Start and Stop, or would you again expect the program that used it to keep the HttpListener reference around for all those things?

    My initial reaction tells me that I want a combination. For one thing, I would like my WebDAVListener class to look like a complete implementation, hiding the fact that there is a HttpListener object beneath it. On the other hand, I would like to build unit tests without actually spinning up a networked server, so some kind of mocking ability would be nice to have as well, which suggests the interface-wrapper way. One way I could solve this would be this:

        public WebDAVListener() : this(new HttpListenerWrapper()) { }
        public WebDAVListener(IHttpListenerWrapper listener) { }

    And then I would implement all the methods of HttpListener (at least all those that make sense) in my own class, mostly by just chaining the call to the underlying HttpListener object.

    What do you think? Final question: if I go the way of the interface, assuming the interface maps 1-to-1 onto the HttpListener class and is written just to add support for mocking, is such an interface called a wrapper or an adapter?
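
    A minimal sketch of the interface-wrapper option (the member list shown is illustrative; a real wrapper would mirror whichever HttpListener members the WebDAV class needs):

        using System.Net;

        public interface IHttpListenerWrapper
        {
            void Start();
            void Stop();
            HttpListenerContext GetContext();
        }

        // Thin pass-through over the real HttpListener.
        public class HttpListenerWrapper : IHttpListenerWrapper
        {
            private readonly HttpListener inner = new HttpListener();
            public void Start() { inner.Start(); }
            public void Stop() { inner.Stop(); }
            public HttpListenerContext GetContext() { return inner.GetContext(); }
        }

        public class WebDAVListener
        {
            private readonly IHttpListenerWrapper listener;
            public WebDAVListener() : this(new HttpListenerWrapper()) { }
            public WebDAVListener(IHttpListenerWrapper listener) { this.listener = listener; }
            public void Start() { listener.Start(); }
            public void Stop() { listener.Stop(); }
        }

    Tests pass a mock IHttpListenerWrapper to the second constructor, while production code uses the parameterless one and never sees the wrapper. As for naming: an interface that maps 1-to-1 onto a class purely for mockability is usually just called a wrapper; "adapter" tends to imply translating to a different interface.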

  • Slightly different execution times between python2 and python3

    - by user557634
    Hi. Recently I wrote a simple generator of permutations in Python (an implementation of the "plain changes" algorithm described by Knuth in "The Art... 4"). I was curious about the differences in execution time between python2 and python3. Here is my function:

        def perms(s):
            s = tuple(s)
            N = len(s)
            if N <= 1:
                yield s[:]
                raise StopIteration()
            for x in perms(s[1:]):
                for i in range(0, N):
                    yield x[:i] + (s[0],) + x[i:]

    I tested both using the timeit module. My tests:

        $ echo "python2.6:" && ./testing.py && echo "python3:" && ./testing3.py
        python2.6:
        args    time[ms]
        1       0.003811
        2       0.008268
        3       0.015907
        4       0.042646
        5       0.166755
        6       0.908796
        7       6.117996
        8       48.346996
        9       433.928967
        10      4379.904032
        python3:
        args    time[ms]
        1       0.00246778964996
        2       0.00656183719635
        3       0.01419159912
        4       0.0406293644678
        5       0.165960511097
        6       0.923101452814
        7       6.24257639835
        8       53.0099868774
        9       454.540967941
        10      4585.83498001

    As you can see, for fewer than 6 arguments python3 is faster, but then the roles are reversed and python2.6 does better. As I am a novice in Python programming, I wonder why that is. Or maybe my script is more optimized for python2? Thank you in advance for a kind answer :)
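
    For reference, the recursion itself (generate the permutations of the tail, then insert the head at every position) ports directly to a C# iterator; a rough illustrative sketch, not code from the post:

        using System.Collections.Generic;
        using System.Linq;

        static class PlainChanges
        {
            public static IEnumerable<int[]> Perms(int[] s)
            {
                if (s.Length <= 1)
                {
                    yield return s;
                    yield break; // C# analogue of ending the generator
                }
                foreach (int[] x in Perms(s.Skip(1).ToArray()))
                {
                    // Insert s[0] at positions 0..x.Length of each sub-permutation.
                    for (int i = 0; i <= x.Length; i++)
                    {
                        var result = new List<int>(x);
                        result.Insert(i, s[0]);
                        yield return result.ToArray();
                    }
                }
            }
        }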

  • Multiple Solution Layout for ASP.NET Web Portal?

    - by Jared S
    At work, we've developed a custom ASP.NET web portal (very similar to iGoogle). We have "Apps" (self-contained, large web forms) and "Modules" (similar to Google Gadgets). Currently, we use a single-solution model. Right now, we have:

    - 3 core projects
    - 60 application projects
    - 80 module projects

    To reduce copying and pasting between projects, we're going to factor common functionality (data access, business logic) out into separate projects. I'd also like to introduce unit tests, which will increase the number of projects even more. We've already reached the point where Visual Studio is choking on the number of projects; we generally load only the 3 core projects plus whichever app or module project we're working on. Would a different solution structure help us out? Our number of projects is only going to increase. In general, an app or module references only the 3 core projects; soon, apps/modules may start referencing the data access/business logic projects too, but apps and modules generally do not reference one another. So, to recap: what is the best practice for solution structure when there are MANY projects that use a small number of core projects?

  • Automated test, build and deploy

    - by mike79
    I have Visual Studio Team Suite 2008. I was unable to meet the requirements to set up TFS, so I'm using TortoiseSVN and VisualSVN as my version control in VSTS. I need the system set up to do the following:

    - I need to be able to create and track work items.
    - When updates are made to the current project worked on in VSTS, the updates will be committed back to version control.
    - Tests will be run to see that updates don't break the application. If there's a problem with an update, it will be reported back to the developer.
    - If there's no problem with the app, which is a ClickOnce application, it will automatically be built and deployed to an FTP server.

    I've never worked with version control, build servers, automated testing, or continuous integration, so I need to know what needs to be put in place for this type of system. I don't know which combination/stack I should be using: CC.net, TeamCity, Hudson, NAnt, NUnit, MsTest, Trac, BugTracker.net, NDepend, VisualSVN Server, Perforce, MSDeploy, SCM. I want something that is free/open source and relatively easy to set up and use. Please suggest a setup that will fit my needs. Any help appreciated.

  • How to access variables in shared memory

    - by user1723361
    I am trying to create a shared memory segment containing three integers and an array. The segment is created and a pointer is attached, but when I try to access the values of the variables (whether changing, printing, etc.) I get a segmentation fault. Here is the code I tried:

        #include <stdio.h>
        #include <stdbool.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>

        #define SIZE 10

        int* shm_front;
        int* shm_end;
        int* shm_count;
        int* shm_array;
        int shm_size = 3*sizeof(int) + sizeof(shm_array[SIZE]);

        int main(int argc, char* argsv[])
        {
            int shmid;

            //create shared memory segment
            if((shmid = shmget(IPC_PRIVATE, shm_size, 0644)) == -1) {
                printf("error in shmget");
                exit(1);
            }

            //obtain the pointer to the segment
            if((shm_front = (int*)shmat(shmid, (void *)0, 0)) == (void *)-1) {
                printf("error in shmat");
                exit(1);
            }

            //move down the segment to set the other pointers
            shm_end = shm_front + 1;
            shm_count = shm_front + 2;
            shm_array = shm_front + 3;

            //tests on shm
            //*shm_end = 10;                //gives segmentation fault
            //printf("\n%d", *shm_front);   //gives segmentation fault

            //clean-up
            //get rid of shared memory
            shmdt(shm_front);
            shmctl(shmid, IPC_RMID, NULL);

            //printf("\n\n");
            return 0;
        }

    I tried accessing the shared memory by dereferencing the pointer to the struct, but got a segmentation fault each time.

  • Algorithm for generating an array of non-equal costs for a transport problem optimization

    - by Carlos
    I have an optimizer that solves a transportation problem, using a cost matrix of all the possible paths. The optimizer works fine, but if two of the costs are equal, the solution contains one more path than the minimum number of paths. (Think of it as load-balancing routers: if two routes have the same cost, you'll use them both.) I would like the minimum number of routes, and to do that I need a cost matrix in which no two costs are equal within a certain tolerance. At the moment, I'm passing the cost matrix through a baking function which tests every entry for equality to each of the other entries, and moves it by a fixed percentage if it matches. However, this approach requires N^2 comparisons, and if the starting values are all the same, the last cost will end up r^N bigger (r being the arbitrary fixed percentage). There is also the problem that by multiplying by the percentage, you can land on top of yet another value. So the problem seems to have an element of recursion, or at least repeated checking, which bloats the code. The current implementation is basically not very good (I won't paste my GOTO-using code here for you all to mock), and I'd like to improve it. Is there a name for what I'm after, and is there a standard implementation? Example: {1,1,2,3,4,5} (tol = 0.05) becomes {1,1.05,2,3,4,5}
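
    One O(N log N) way to reproduce the behaviour in the example (a sketch of an assumed approach, not the asker's code): sort the costs, then make a single pass that pushes any entry lying within the tolerance of its predecessor up to predecessor + tolerance. Mapping the nudged values back to their positions in the cost matrix would need an index carried alongside each value.

        using System;

        static class TieBreaker
        {
            // Returns a sorted copy of costs in which no two entries are
            // closer than tol. A run of k equal values grows by only k*tol.
            public static double[] BreakTies(double[] costs, double tol)
            {
                var sorted = (double[])costs.Clone();
                Array.Sort(sorted);
                for (int i = 1; i < sorted.Length; i++)
                {
                    if (sorted[i] - sorted[i - 1] < tol)
                        sorted[i] = sorted[i - 1] + tol;
                }
                return sorted;
            }
        }

    With {1, 1, 2, 3, 4, 5} and tol = 0.05 this yields {1, 1.05, 2, 3, 4, 5}, matching the example, and it avoids both the N^2 comparisons and the compounding r^N growth.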

  • Common "truisms" needing correction the most

    - by Charles Bretana
    In addition to "I never met a man I didn't like", Will Rogers had another great little ditty I've always remembered: "It's not what you don't know that'll hurt you, it's what you do know that ain't so." We all know or subscribe to many IT "truisms" that mostly have a strong basis in fact: in something in our professional careers, something we learned from others, or lessons learned the hard way, by ourselves or by those who came before us. Unfortunately, as these truisms spread throughout the community, the details (why they came about and the caveats that affect when they apply) tend not to spread along with them. We all have a tendency to look for, and latch on to, small "rules" or principles that we can use to avoid doing a complete exhaustive analysis for every decision. But even though they are correct much of the time, when we misapply them we pay a penalty that could have been avoided by understanding the details behind them. For example, when user-defined functions were first introduced in SQL Server it became "common knowledge" within a year or so that they had extremely bad performance (because they required a re-compilation for each use) and should be avoided. This "truism" still feeds many database developers' aversion to using UDFs, even though Microsoft's introduction of inline UDFs, which do not suffer from this issue at all, mitigates the problem substantially. In recent years I have run into numerous DBAs who still believe you should "never" use UDFs because of this. What other common not-so-true "truisms" do you know, which many developers believe, that are not quite as universally true as is commonly understood, and which the developer community would benefit from being better educated about? Please include why it was "true" to start off with, and under what circumstances it's not true. Limit responses to issues that are technical, where the "common" application of the "rule or principle" is in fact correct most of the time, or was correct back when it was first elucidated, but in the edge cases (or because of not understanding the principle thoroughly, because technology has changed since it first spread, or when applying the rule today without understanding the details behind it) can easily backfire or cause the opposite of the intended effect.

  • Combining XmlWriter objects?

    - by Kevin
    The way my application is structured, each component generates output as XML and returns an XmlWriter object. Before rendering the final output to the page, I combine all the XML and perform an XSL transformation on that object. Below is a simplified code sample of the application structure. Does it make sense to combine XmlWriter objects like this? Is there a better way to structure my application? The optimal solution would be one where I didn't have to pass a single XmlWriter instance as a parameter to each component.

        function page1Xml() {
            $content = new XmlWriter();
            $content->openMemory();
            $content->startElement('content');
            $content->text('Sample content');
            $content->endElement();
            return $content;
        }

        function generateSiteMap() {
            $sitemap = new XmlWriter();
            $sitemap->openMemory();
            $sitemap->startElement('sitemap');
            $sitemap->startElement('page');
            $sitemap->writeAttribute('href', 'page1.php');
            $sitemap->text('Page 1');
            $sitemap->endElement();
            $sitemap->endElement();
            return $sitemap;
        }

        function output($content) {
            $doc = new XmlWriter();
            $doc->openMemory();
            $doc->writePi('xml-stylesheet', 'type="text/xsl" href="template.xsl"');
            $doc->startElement('document');
            $doc->writeRaw( generateSiteMap()->outputMemory() );
            $doc->writeRaw( $content->outputMemory() );
            $doc->endElement();
            $doc->endDocument();
            $output = xslTransform($doc);
            return $output;
        }

        $content = page1Xml();
        echo output($content);

    Update: I may abandon XmlWriter altogether and use DomDocument instead. It is more flexible, and it also seemed to perform better (at least in the crude tests I created).

  • Any suggestions to improve my PDO connection class?

    - by Scarface
    Hey guys, I am pretty new to PDO, so I basically just put together a simple connection class using information out of the introductory book I was reading. Is this connection efficient? If anyone has any informative suggestions, I would really appreciate it.

        class PDOConnectionFactory {
            public $con = null;
            // switch database?
            public $dbType = "mysql";
            // connection parameters
            public $host = "localhost";
            public $user = "user";
            public $senha = "password";
            public $db = "database";
            public $persistent = false;

            // new PDOConnectionFactory( true )  <--- persistent connection
            // new PDOConnectionFactory()        <--- no persistent connection
            public function PDOConnectionFactory( $persistent=false ) {
                // it verifies the persistence of the connection
                if( $persistent != false ) {
                    $this->persistent = true;
                }
            }

            public function getConnection() {
                try {
                    $this->con = new PDO($this->dbType.":host=".$this->host.";dbname=".$this->db,
                                         $this->user, $this->senha,
                                         array( PDO::ATTR_PERSISTENT => $this->persistent ));
                    // carried through successfully, it returns connected
                    return $this->con;
                // in case that an error occurs, it returns the error
                } catch ( PDOException $ex ) {
                    echo "We are currently experiencing technical difficulties. We have a bunch of monkies working really hard to fix the problem. Check back soon: ".$ex->getMessage();
                }
            }

            // close connection
            public function Close() {
                if( $this->con != null )
                    $this->con = null;
            }
        }

  • How do you handle the tension between refactoring and the need for merging?

    - by Xavier Nodet
    Hi. Our policy when delivering a new version is to create a branch in our VCS and hand it to our QA team. When the latter gives the green light, we tag and release our product. The branch is kept to receive (only) bug fixes, so that we can create technical releases. Those bug fixes are subsequently merged onto the trunk. During this time, the trunk sees the main development work, and is potentially subject to refactoring changes. The issue is that there is a tension between the need to have a stable trunk (so that the merge of bug fixes succeeds; it usually can't if the code has been e.g. extracted to another method, or moved to another class) and the need to refactor it when introducing new features. The policy in our place is not to do any refactoring before enough time has passed and the branch is stable enough. When this is the case, one can start making refactoring changes on the trunk, and bug fixes have to be manually committed to both the trunk and the branch. But this means that developers must wait quite some time before committing any refactoring change to the trunk, because this could break the subsequent merge from the branch to the trunk. And having to manually port bugs from the branch to the trunk is painful. It seems to me that this hampers development... How do you handle this tension? Thanks.

  • Converting Source ASCII Files to JPEGs

    - by CommonsWare
    I publish technical books, in print, PDF, and Kindle/MOBI, with EPUB on the way. The Kindle does not support monospace fonts, which are kinda useful for source code listings. The only way to do monospace fonts is to convert the text (Java source, HTML, XML, etc.) into JPEG images. More specifically, due to pagination issues, a given input ASCII file needs to be split into slices of ~6 lines each, with each slice turned into a JPEG, so listings can span a screen. This is a royal pain. My current mechanism to do that involves:

    - running expand to set a consistent 2-space tab size, which pipes to...
    - a2ps, which pipes to...
    - a small Perl snippet that adds a "%%LanguageLevel: 3\n" line, which pipes to...
    - ImageMagick's convert, to take the (E)PS and make a JPEG out of it, with an appropriate background, cropped to 575x148+5+28, etc.

    That used to work 100% of the time. It now works 95% of the time. The rest of the time, I get "convert: geometry does not contain image" errors, which I cannot seem to get rid of, in part because I don't understand what the problem is. Before this process, I used a pretty-print engine (source-highlight) to get HTML out of the source code... but then the only thing I could find to convert the HTML into JPEGs was to automate screen grabs from an embedded Gecko engine. Reliability stank, which is why I switched to my current mechanism. So, if you were me, and you needed to turn source listings into JPEG images in an automated fashion, how would you do it? Bonus points if it offers some sort of pretty-print process (e.g., bolded keywords)! Or, if you know what typically causes "convert: geometry does not contain image", that might help. My current process is ugly, but if I could get it back to 100% reliability, that'd be just fine for now. Thanks in advance!

  • Why isn't our C# graphics code working any more?

    - by Jared
    Here's the situation: we have some generic graphics code that we use for one of our projects. After doing some clean-up of the code, it seems like something isn't working anymore (the graphics output looks completely wrong). I ran a diff against the last version of the code that gave the correct output, and it looks like we changed one of our functions as follows:

        static public Rectangle FitRectangleOld(Rectangle rect, Size targetSize)
        {
            if (rect.Width <= 0 || rect.Height <= 0)
            {
                rect.Width = targetSize.Width;
                rect.Height = targetSize.Height;
            }
            else if (targetSize.Width * rect.Height > rect.Width * targetSize.Height)
            {
                rect.Width = rect.Width * targetSize.Height / rect.Height;
                rect.Height = targetSize.Height;
            }
            else
            {
                rect.Height = rect.Height * targetSize.Width / rect.Width;
                rect.Width = targetSize.Width;
            }
            return rect;
        }

    to

        static public Rectangle FitRectangle(Rectangle rect, Size targetSize)
        {
            if (rect.Width <= 0 || rect.Height <= 0)
            {
                rect.Width = targetSize.Width;
                rect.Height = targetSize.Height;
            }
            else if (targetSize.Width * rect.Height > rect.Width * targetSize.Height)
            {
                rect.Width *= targetSize.Height / rect.Height;
                rect.Height = targetSize.Height;
            }
            else
            {
                rect.Height *= targetSize.Width / rect.Width;
                rect.Width = targetSize.Width;
            }
            return rect;
        }

    All of our unit tests are passing, and nothing in the code has changed except for some syntactic shortcuts. But like I said, the output is wrong. We'll probably just revert back to the old code, but I'm curious if anyone has any idea what's going on here. Thanks.
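
    A hint at the likely culprit: the "syntactic shortcut" changes the order of integer operations. rect.Width = rect.Width * targetSize.Height / rect.Height multiplies first and then divides, while rect.Width *= targetSize.Height / rect.Height performs the integer division first, truncating toward zero before the multiply. A minimal illustration (values chosen for the example):

        int width = 300, height = 400, targetHeight = 100;
        int oldWidth = width * targetHeight / height;   // (300 * 100) / 400 = 75
        int newWidth = width * (targetHeight / height); // 300 * (100 / 400) = 300 * 0 = 0

    Since Rectangle and Size expose int properties, any case where targetSize.Height < rect.Height now scales the width to zero.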

  • Optimization of SQL query regarding pair comparisons

    - by InfiniteSquirrel
    Hi, I'm working on a pair-comparison site where a user loads a list of films and grades from another site. My site then picks two random movies and matches them against each other; the user selects the better of the two, and a new pair is loaded. This gives a complete list of movies ordered by whichever is best. The database contains three tables:

    - fm_film_data contains all imported movies:

        fm_film_data(id int(11), imdb_id varchar(10), tmdb_id varchar(10), title varchar(255),
                     original_title varchar(255), year year(4), director text, description text,
                     poster_url varchar(255))

    - fm_films contains all information related to a user: what movies the user has seen, what grades the user has given, as well as information about each film's wins/losses for that user:

        fm_films(id int(11), user_id int(11), film_id int(11), grade int(11), wins int(11), losses int(11))

    - fm_log contains records of every duel that has occurred:

        fm_log(id int(11), user_id int(11), winner int(11), loser int(11))

    To pick a pair to show the user, I've created a MySQL query that checks the log and picks a pair at random:

        SELECT pair.id1, pair.id2
        FROM (SELECT part1.id AS id1, part2.id AS id2
              FROM fm_films AS part1, fm_films AS part2
              WHERE part1.id <> part2.id
                AND part1.user_id = [!!USERID!!]
                AND part2.user_id = [!!USERID!!]) AS pair
        LEFT JOIN (SELECT winner AS id1, loser AS id2
                   FROM fm_log WHERE fm_log.user_id = [!!USERID!!]
                   UNION
                   SELECT loser AS id1, winner AS id2
                   FROM fm_log WHERE fm_log.user_id = [!!USERID!!]) AS log
               ON pair.id1 = log.id1 AND pair.id2 = log.id2
        WHERE log.id1 IS NULL
        ORDER BY RAND()
        LIMIT 1

    This query takes some time to load: about 6 seconds in our tests with two users with about 800 grades each. I'm looking for a way to optimize this while still ensuring that each duel appears only once. The server runs MySQL version 5.0.90-community.

  • Why isn't this company contacting me? [closed]

    - by Alan
    I had a phone screen the other day with a company that I really want to work for. It went pretty well, based on cues from the HR person, such as "Next step we're going to send you a programming test," "Well, before I get ahead of myself, do you want to continue the interviewing process?" and "We'll send out the test later this afternoon. It doesn't sound like you'll have trouble with it, but I want to be honest: we do have a high failure rate on it." The questions asked weren't technical, just going down my resume, talking about the work I've done and how it relates to the position. Nothing I couldn't talk through. This was last Thursday. It's now Tuesday, and I haven't received the test yet. I sent a follow-up email yesterday to the lady who interviewed me, but haven't gotten a response. Has anyone had a similar experience? Am I reading too much into this, or was I off the mark in thinking I had moved on to the next step in the interview process? Since this is a company I really want to work for, I'm driving myself insane enumerating all the various what-if scenarios.

  • How can I abstract out the core functionality of several Rails applications?

    - by hornairs
    I'd like to develop a number of non-trivial Rails applications which all implement a core set of functionality but each have certain particular customizations, extensions, and aesthetic differences. How can I pull the core functionality (models, controllers, helpers, support classes, tests) common to all these systems out in such a way that updating the core will benefit every application based upon it? I've seen Rails Engines, but they seem to be too detached, almost too abstracted to be built upon. I can see them being useful for adding one component to an existing app, for example bolting a blog engine onto your existing e-commerce site. Since engines seem to be mostly self-contained, it seems difficult and inconvenient to override their functionality and views while keeping DRY. I've also considered abstracting the code into a gem, but this seems a little odd. Do I make the gem depend on the Rails gems, define models & controllers inside it, and then subclass them in my various applications? Or do I define many modules inside the gem that I include in the different spots inside my various applications? How do I test the gem, and then test the set of customizations and overridden functionality on top of it? I'm also concerned with how I'll develop the gem and the Rails apps in tandem: can I vendor a git repository of the gem into the app and push from that, so I don't have to build a new gem every iteration? Also, are there private gem hosts, and can I set up my own gem source? Finally, any general suggestions for this kind of undertaking? Abstraction paradigms to adhere to? Required reading? Comments from the wise who have done this before? Thanks!

  • Typical SVN repo structure seems to be sub-optimal for continuous integration...

    - by Dave
    I've set up our SVN repository as the Subversion book suggests, and this is also how my previous companies have done it. It looks something like this:

        /trunk
        /branches
        /tags
        /extlibs
        /docs

    where the first three are pretty obvious, and extlibs is for 3rd-party assemblies that we wouldn't typically recompile ourselves. All of this works great for the daily development stuff. Now I've installed TeamCity and have builds, unit tests, code coverage, and code analysis running. Everything is great, except for the fact that this code structure results in too much code getting downloaded. So here's the catch-22, in my opinion: it's silly to download all of the aforementioned folders from the SVN repo when I only need /trunk and /extlibs, but I can only specify one repo folder to download in the TeamCity VCS settings. The other possibility is to put the /extlibs folder into /trunk, but in order to compile branches, /extlibs would have to go into all of those as well (since I usually branch the trunk, not individual subfolders)... and that would seem infinitely more evil, since /extlibs, with all of the binaries stored there, could actually be larger than /trunk and /branches. Do you guys have any suggestions for me? Thanks!

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following. Suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches are beginning to take time. I want to speed things up by archiving data that is more than 3 years old: saving it into a table called orders-archive, and then purging those rows from the orders table. So if we need to research something, or a customer wants to pull older information, they still can; but 99% of the lookups are done on orders no older than a year and a half, so there is no reason to keep looking through the older data all the time. These move & purge operations can then be cron'd to run on a weekly basis. I already did some tests, and I know that I will slash my search times by about a factor of 4. So far so good, right? However, I was thinking about how to implement the older archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However, I have about 20 tables that I want to archive, and god knows how many searches/finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant Rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.

  • Compile for mixed platform (32, 64) and reference a 32 or 64 bit DLL resolved at runtime

    - by Nigel Aston
    Using VS2010 under Windows, 32- or 64-bit. Our C# app calls a 3rd-party DLL (managed) that interfaces to an unmanaged DLL. The 3rd-party DLL API appears identical in 32- and 64-bit, although underneath it links to a 32- or 64-bit unmanaged DLL. We want our C# app to run on either a 32- or 64-bit OS; ideally it will auto-detect the OS and load the appropriate 3rd-party DLL, via a simple factory class which tests the Environment. So the neatest solution would be a runtime folder containing:

        OurApp.exe
        3rdParty32.DLL
        3rdPartyUnmanaged32.DLL
        3rdParty64.DLL
        3rdPartyUnmanaged64.DLL

    However, the interface for the managed 3rdParty 32 and 64 DLLs is identical, so both cannot be referenced within the same VS2010 project: when adding the second, the warning triangle is shown and it does not get referenced. Is my only answer to create two extra library DLL projects to reference the 3rdParty 32 and 64 DLLs? So I would end up with this project arrangement (a factory sketch follows below):

    - Project 1: builds OurApp.exe; dynamically creates an object from Project 2 or Project 3.
    - Project 2: builds OurApp32.DLL, which references 3rdParty32.dll.
    - Project 3: builds OurApp64.DLL, which references 3rdParty64.dll.
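
    A minimal sketch of the factory side of that arrangement (the IVendorApi interface, type names, and file names are all hypothetical):

        using System;
        using System.Reflection;

        public static class VendorApiFactory
        {
            public static IVendorApi Create()
            {
                // Pick the wrapper assembly that matches the current process.
                // Environment.Is64BitProcess is available from .NET 4 onward.
                string assemblyFile = Environment.Is64BitProcess ? "OurApp64.dll" : "OurApp32.dll";
                Assembly wrapper = Assembly.LoadFrom(assemblyFile);

                // Assumes both wrapper assemblies expose the same type name,
                // each implementing the shared IVendorApi interface.
                Type apiType = wrapper.GetType("OurApp.VendorApi");
                return (IVendorApi)Activator.CreateInstance(apiType);
            }
        }

    One way to make the cast work is to define IVendorApi in a small common assembly referenced by all three projects, so OurApp.exe never references the 3rd-party DLLs directly.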

  • Unit Testing - Algorithm or Sample based ?

    - by ohadsc
    Say I'm trying to test a simple set class:

        public class IntSet : IEnumerable<int>
        {
            public void Add(int i) { ... }
            // IEnumerable implementation...
        }

    And suppose I'm trying to test that no duplicate values can exist in the set. My first option is to insert some sample data into the set, and test for duplicates using my knowledge of the data I used, for example:

        // OPTION 1
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();
            // 3 will be added 3 times
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);

            // I know 3 is the only candidate to appear multiple times
            int counter = 0;
            foreach (int i in set)
                if (i == 3) counter++;

            Assert.AreEqual(1, counter);
        }

    My second option is to test for my condition generically:

        // OPTION 2
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();
            // The following could even be a list of random numbers with a duplicate
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);

            // I am not using my prior knowledge of the sample data;
            // the following line would work for any data
            CollectionAssert.AreEquivalent(new HashSet<int>(values), set);
        }

    Of course, in this example, I conveniently have a set implementation to check against, as well as code to compare collections (CollectionAssert). But what if I didn't have either? This is the situation when you are testing your real-life custom business logic. Granted, testing for expected conditions generically covers more cases, but it becomes very similar to implementing the logic again (which is both tedious and useless: you can't use the same code to check itself!). Basically, I'm asking whether my tests should look like "insert 1, 2, 3 then check something about 3" or "insert 1, 2, 3 and check for something in general". EDIT: To help me understand, please state in your answer whether you prefer OPTION 1 or OPTION 2 (or neither, or that it depends on the case, etc.)
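
    A middle ground worth noting (a sketch using LINQ's System.Linq, not from the original post): even without a reference set implementation, a generic oracle can often be built from a primitive much simpler than the code under test, such as the distinct values of the input:

        // Any duplicate-free collection must enumerate exactly the distinct inputs.
        var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
        var set = new IntSet();
        foreach (int i in values)
            set.Add(i);
        CollectionAssert.AreEquivalent(values.Distinct().ToList(), set.ToList());

    Since Distinct() is a far simpler mechanism than a hand-rolled set, the test does not re-implement the logic it is checking.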

  • Absence of property syntax in Java

    - by Vojislav Stojkovic
    C# has syntax for declaring and using properties. For example, one can declare a simple property like this:

        public int Size { get; set; }

    One can also put a bit of logic into the property, like this:

        public string SizeHex
        {
            get
            {
                return String.Format("{0:X}", Size);
            }
            set
            {
                Size = int.Parse(value, NumberStyles.HexNumber);
            }
        }

    Regardless of whether it has logic or not, a property is used in the same way as a field:

        int fileSize = myFile.Size;

    I'm no stranger to either Java or C# -- I've used both quite a lot and I've always missed having property syntax in Java. I've read in this question that "it's highly unlikely that property support will be added in Java 7 or perhaps ever", but frankly I find it too much work to dig around in discussions, forums, blogs, comments and JSRs to find out why. So my question is: can anyone sum up why Java isn't likely to get property syntax? Is it because it's not deemed important enough when compared to other possible improvements? Are there technical (e.g. JVM-related) limitations? Is it a matter of politics? (e.g. "I've been coding in Java for 50 years now and I say we don't need no steenkin' properties!") Is it a case of bikeshedding?

  • Unit Testing the Use of TransactionScope

    - by Randolpho
    The preamble: I have designed a strongly interfaced and fully mockable data layer class that expects the business layer to create a TransactionScope when multiple calls should be included in a single transaction.

    The problem: I would like to unit test that my business layer makes use of a TransactionScope object when I expect it to. Unfortunately, the standard pattern for using TransactionScope is as follows:

        using (var scope = new TransactionScope())
        {
            // transactional methods
            datalayer.InsertFoo();
            datalayer.InsertBar();
            scope.Complete();
        }

    While this is a really great pattern in terms of usability for the programmer, testing that it's done seems... unpossible to me. I cannot detect that a transient object has been instantiated, let alone mock it to determine that a method was called on it. Yet my goal for coverage implies that I must.

    The question: How can I go about building unit tests that ensure TransactionScope is used appropriately according to the standard pattern?

    Final thoughts: I've considered a solution that would certainly provide the coverage I need, but have rejected it as overly complex and not conforming to the standard TransactionScope pattern. It involves adding a CreateTransactionScope method on my data layer object that returns an instance of TransactionScope. But because TransactionScope contains constructor logic and non-virtual methods, and is therefore difficult if not impossible to mock, CreateTransactionScope would return an instance of DataLayerTransactionScope, which would be a mockable facade into TransactionScope. While this might do the job, it's complex and I would prefer to use the standard pattern. Is there a better way?
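
    One avenue (a sketch with hypothetical IDataLayer/BusinessLayer names and NUnit-style attributes): TransactionScope publishes the ambient transaction through System.Transactions.Transaction.Current, so a test double of the data layer can record what it saw when called, without mocking TransactionScope itself:

        using System.Transactions;

        // Hypothetical test double for the mockable data layer interface.
        class RecordingDataLayer : IDataLayer
        {
            public Transaction FooTransaction { get; private set; }
            public Transaction BarTransaction { get; private set; }

            public void InsertFoo() { FooTransaction = Transaction.Current; }
            public void InsertBar() { BarTransaction = Transaction.Current; }
        }

        [Test]
        public void InsertsRunInsideASingleTransactionScope()
        {
            var dataLayer = new RecordingDataLayer();
            var business = new BusinessLayer(dataLayer);

            business.SaveFooAndBar(); // the method expected to open the scope

            Assert.IsNotNull(dataLayer.FooTransaction);
            Assert.AreSame(dataLayer.FooTransaction, dataLayer.BarTransaction);
        }

    This asserts the observable effect (both calls saw the same ambient transaction) rather than the construction of the transient scope object, sidestepping the mockability problem.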

  • Best way to distribute form that can be printed or saved?

    - by Jason Antman
    I need to develop a simple form (intended only for printing) to be filled in by arbitrary end users (i.e. no specialized software). Ideally, I'd like the end user to be able to save their inputs to the form and update it periodically. It seems that (at least without LiveCycle Enterprise Suite) Adobe Reader won't save data input into a PDF form. Aside from just distributing the form as a Word document, does anyone have any suggestions? Background: I do some work for a volunteer ambulance corps. They have a lot of elderly patients who don't know (or can't remember) their medical history. They want to develop a common form with personal information (name, address, DOB, medications list, etc.) for elderly residents to hang on their refrigerators (apparently a common solution to this problem). As some of them (or their children/grandchildren) are computer literate, it would make the most sense to provide a downloadable blank form that can be filled in, saved, updated, and re-printed as needed. Due to worries about privacy, HIPAA, etc., anything with server-side generation is out; it needs to be 100% client-side, and in a format that the majority of non-technical computer users can access without additional software. Thanks for any tips... at this point, I'm leaning towards just using a .doc form.

  • Cache problem running two consecutive HTTP GET requests from APP1 to APP2

    - by user502052
    I use Ruby on Rails 3 and I have 2 applications (APP1 and APP2) working on two subdomains:

        app1.domain.local
        app2.domain.local

    and I am trying to run two consecutive HTTP GET requests from APP1 to APP2, like this.

    Code in APP1 (request):

        response1 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=first&id=1") )
        response2 = Net::HTTP.get( URI.parse("http://app2.domain.local/test=second&id=1") )

    Code in APP2 (response):

        respond_to do |format|
          if <model_name>.find(params[:id]).<field_name> == "first"
            <model_name>.find(params[:id]).update_attribute ( <field_name>, <field_value> )
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          elsif <model_name>.find(params[:id]).<field_name> == "second"
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          end
        end

    After the first request I get the correct XML (response1 is what I expect), but on the second I don't (response2 isn't what I expect). Doing some tests, I found that the second time <model_name>.find(params[:id]).<field_name> runs (for the elsif statement) it always returns a blank value, so the code in the elsif branch is never run. Is it possible that the problem is related to caching <model_name>.find(params[:id]).<field_name>?

    P.S.: I read about ETag and conditional GET, but I am not sure that I must use that approach. I would like to keep it all simple.

  • How to rewrite data-driven test suites of JUnit 3 in Junit 4?

    - by rics
    I am using data-driven test suites running on JUnit 3, based on Rainsberger's "JUnit Recipes". The purpose of these tests is to check whether a certain function is properly implemented with respect to a set of input-output pairs. Here is the definition of the test suite:

        public static Test suite() throws Exception {
            TestSuite suite = new TestSuite();

            Calendar calendar = GregorianCalendar.getInstance();
            calendar.set(2009, 8, 05, 13, 23); // 2009. 09. 05. 13:23
            java.sql.Date date = new java.sql.Date(calendar.getTime().getTime());

            suite.addTest(new DateFormatTestToString(date, JtDateFormat.FormatType.YYYY_MON_DD, "2009-SEP-05"));
            suite.addTest(new DateFormatTestToString(date, JtDateFormat.FormatType.DD_MON_YYYY, "05/SEP/2009"));
            return suite;
        }

    and the definition of the testing class:

        public class DateFormatTestToString extends TestCase {
            private java.sql.Date date;
            private JtDateFormat.FormatType dateFormat;
            private String expectedStringFormat;

            public DateFormatTestToString(java.sql.Date date,
                                          JtDateFormat.FormatType dateFormat,
                                          String expectedStringFormat) {
                super("testGetString");
                this.date = date;
                this.dateFormat = dateFormat;
                this.expectedStringFormat = expectedStringFormat;
            }

            public void testGetString() {
                String result = JtDateFormat.getString(date, dateFormat);
                assertTrue( expectedStringFormat.equalsIgnoreCase(result) );
            }
        }

    How is it possible to test several input-output parameters of a method using JUnit 4? This question and its answers explained to me the distinction between JUnit 3 and 4 in this regard. That question and its answers describe the way to create a test suite for a set of classes, but not for a method with a set of different parameters.

  • How to make Spring load a JDBC Driver BEFORE initializing Hibernate's SessionFactory?

    - by Bill_BsB
    I'm developing a Spring (2.5.6) + Hibernate (3.2.6) web application to connect to a custom database. For that I have a custom JDBC driver and a custom Hibernate dialect. I know for sure that these custom classes work (hard-coded stuff in my unit tests). The problem, I guess, is with the order in which things get loaded by Spring. Basically:

    1. The custom database initializes.
    2. Spring loads beans from web.xml.
    3. Spring loads the servlet beans (applicationContext.xml).
    4. Hibernate kicks in: shows version and all the properties correctly loaded.
    5. Hibernate's HbmBinder runs (maps all my classes).
    6. LocalSessionFactoryBean - Building new Hibernate SessionFactory.
    7. DriverManagerConnectionProvider - using driver: MyCustomJDBCDriver at CustomDBURL.
    8. I get a SQLException: No suitable driver found for CustomDBURL.
    9. Hibernate loads the custom dialect.
    10. My CustomJDBCDriver finally gets registered with DriverManager (log messages).
    11. SettingsFactory runs.
    12. SchemaExport runs (hbm2ddl).
    13. I get a SQLException: No suitable driver found for CustomDBURL (again?!).

    The application gets successfully deployed, but there are no tables in my custom database. Things that I tried so far:

    - Different techniques for passing Hibernate properties: embedded in the 'sessionFactory' bean, or loaded from a hibernate.properties file. Nothing worked, but I didn't try a hibernate.cfg.xml file or a dataSource bean yet.
    - MyCustomJDBCDriver has a static initializer block that registers it self with the DriverManager.
    - Different combinations of lazy initialization (lazy-init="true") of the Spring beans, but nothing worked.

    My custom JDBC driver should be the first thing to be loaded (not sure if by Spring, but...!). Can anyone give me a solution for this, or maybe a hint about what else I could try? I can provide more details (huge stack traces, for instance) if that helps. Thanks in advance.
