Search Results

Search found 265 results on 11 pages for 'regression'.

Page 9 of 11

  • What's the best practice to setup testing for ASP.Net MVC? What to use/process/etc?

    - by melaos
    Hi there, I'm trying to learn how to properly set up testing for an ASP.NET MVC project, and from what I've been reading here and there so far, the definition of legacy code piques my interest: legacy code is any code without unit tests. I did my project in a hurry, without the time to properly set up unit tests for the app, and I'm still learning how to do TDD and unit testing at the same time. Then I came upon Selenium IDE/RC and was using it to test on the browser end, and it was around that time that I came across the concept of integration testing. My understanding is that unit tests should define the behaviour and basic assumptions of each function, and if the function depends on something else, that something else needs to be mocked so that each test stays isolated and runs fast.

    Questions: Am I right to say that the project should have started with unit tests and proper mocks, using something like Rhino Mocks, and that anything which requires a third-party DLL, database access and so on should then be covered by integration testing using Selenium? I have a function which calls a third-party DLL, and I'm not sure whether to write a unit test in NUnit that just instantiates the object and passes it some dummy data (which breaks the mocking part), or to cover that part in my Selenium integration tests when I submit my forms and the DLL gets called. And for user acceptance tests, is it safe to say we can just use Selenium again? Am I missing something, or is there a better way or framework?

    I'm trying to put in more tests for regression testing, to make sure nothing breaks when we add new features. I also like the idea of TDD because it helps to better define each function, a bit like meta-documentation. Thanks! I hope this question isn't too subjective, because I need it for my case.
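
    The split described above - fast, isolated unit tests with mocked dependencies, plus slower browser-driven integration tests for the third-party and database pieces - is easy to illustrate outside .NET as well. Below is a minimal Python sketch using unittest.mock in place of Rhino Mocks; the OrderService and tax-gateway names are invented for illustration and are not from the question.

      import unittest
      from unittest.mock import Mock

      class OrderService:
          """Hypothetical class with a third-party dependency injected via the constructor."""
          def __init__(self, tax_gateway):
              self.tax_gateway = tax_gateway  # e.g. a thin wrapper around the third-party DLL

          def total(self, amount):
              # Delegate the tax lookup to the dependency, then combine the result.
              return amount + self.tax_gateway.tax_for(amount)

      class OrderServiceTests(unittest.TestCase):
          def test_total_adds_tax_from_gateway(self):
              # The third-party component is replaced by a mock, so the test is fast,
              # deterministic, and exercises only OrderService's own logic.
              gateway = Mock()
              gateway.tax_for.return_value = 5
              service = OrderService(gateway)

              self.assertEqual(service.total(100), 105)
              gateway.tax_for.assert_called_once_with(100)

      if __name__ == "__main__":
          unittest.main()

    The real third-party component would then only be exercised in the Selenium-driven integration run, where stubbing it out would defeat the purpose.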

    Read the article

  • Is there a good digraph layout library callable from C++?

    - by Steve314
    The digraphs represent finite automata. Up until now my test program has been writing out dot files for testing. This is pretty good both for regression testing (keep the verified output files in subversion, ask it if there has been a change) and for visualisation. However, there are some problems... Basically, I want something callable from C++ and which plans a layout for my states and transitions but leaves the drawing to me - something that will allow me to draw things however I want and draw on GUI (wxWidgets) windows. I also want a license which will allow commercial use - I don't need that at present, and I may very well release as open source, but I don't want to limit my options ATM. The problems with GraphViz are (1) the warnings about building from source on Windows, (2) all the unnecessary dependencies for rendering and parsing, and (3) the (presumed) lack of a documented API specifically and purely for layout. Basically, I want to be able to specify my states (with bounding rectangle sizes) and transitions, and read out positions for the states and waypoints for each transition, then draw based on those co-ordinates myself. I haven't really figured out how annotations on transitions should be handled, but there should be some kind of provision for specifying bounding-box-sizes for those, associating them with transitions, and reading out positions. Does anyone know of a library that can handle those requirements? I'm not necessarily against implementing something for myself, but in this case I'd rather avoid it if possible.
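
    If no suitable C++ library turns up, the "plan positions, draw yourself" split can at least be prototyped cheaply before committing to one. A rough Python sketch with networkx, purely to illustrate the desired API shape (it is not one of the C++ options discussed above): the library is asked only for coordinates, and all drawing stays in your hands.

      import networkx as nx

      # Hypothetical finite automaton: states as nodes, transitions as edges.
      g = nx.DiGraph()
      g.add_edges_from([("q0", "q1"), ("q1", "q2"), ("q2", "q0"), ("q1", "q1")])

      # Ask the library only for a layout; rendering stays under our control.
      positions = nx.spring_layout(g, seed=42)   # dict: state -> (x, y)

      for state, (x, y) in positions.items():
          # In the real application these coordinates would feed a wxWidgets device context.
          print(f"state {state}: draw its bounding box centred at ({x:.2f}, {y:.2f})")

      for src, dst in g.edges():
          print(f"transition {src} -> {dst}: draw an edge between the two centres")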

    Read the article

  • When to choose which machine learning classifier?

    - by LM
    Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about any classification task in general.) How do I know which classifier I should use? (Decision tree, SVM, Bayesian, logistic regression, etc.) In which cases is one of them the "natural" first choice, and what are the principles for choosing that one? Examples of the type of answers I'm looking for (from Manning et al.'s "Introduction to Information Retrieval book": http://nlp.stanford.edu/IR-book/html/htmledition/choosing-what-kind-of-classifier-to-use-1.html): a. If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). [I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data.] b. If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability. What are other guidelines? Even answers like "if you'll have to explain your model to some upper management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though. Also, for a somewhat separate question, besides standard Bayesian classifiers, are there 'standard state-of-the-art' methods for comment spam detection (as opposed to email spam)? [Not sure if stackoverflow is the best place to ask this question, since it's more machine learning than actual programming -- if not, any suggestions for where else?]
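
    The bias/variance intuition behind point (a) is easy to check empirically. A small scikit-learn sketch (the dataset is synthetic and Gaussian Naive Bayes stands in for "a high-bias classifier"; with real data the exact numbers will differ, but Naive Bayes typically holds up better at the small-n end while logistic regression catches up or overtakes as n grows):

      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for a labeled classification problem (fraud, comment spam, ...).
      X, y = make_classification(n_samples=2000, n_features=40, n_informative=10, random_state=0)

      for n in (60, 300, 2000):                      # "limited" vs "plenty" of labeled data
          for clf in (GaussianNB(), LogisticRegression(max_iter=1000)):
              score = cross_val_score(clf, X[:n], y[:n], cv=5).mean()
              print(f"{type(clf).__name__:<20} n={n:<5} cv accuracy = {score:.3f}")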

    Read the article

  • using load instead of other I/O command

    - by Amadou
    How can I modify this program to use the load command (ASCII load) to read the (x,y) pairs instead of fscanf?

      n=0; sum_x = 0; sum_y = 0; sum_x2 = 0; sum_xy = 0;
      disp('This program performs a least-squares fit of an');
      disp('input data set to a straight line. Enter the name');
      disp('of the file containing the input (x,y) pairs: ');
      filename = input(' ','s');
      [fid,msg] = fopen(filename,'rt');
      if fid<0
          disp(msg);
      else
          [in,count]=fscanf(fid, '%g %g',2);
          while ~feof(fid)
              x=in(1); y=in(2);
              n=n+1;
              sum_x=sum_x+x; sum_y=sum_y+y;
              sum_x2=sum_x2+x.^2; sum_xy=sum_xy+x*y;
              [in,count] = fscanf(fid, '%f',[1 2]);
          end
          fclose(fid);
          x_bar = sum_x / n; y_bar = sum_y / n;
          slope = (sum_xy - sum_x*y_bar) / (sum_x2 - sum_x*x_bar);
          y_int = y_bar - slope * x_bar;
          fprintf('Regression coefficients for the least-squares line:\n');
          fprintf('Slope (m)     =%12.3f\n',slope);
          fprintf('Intercept (b) =%12.3f\n',y_int);
          fprintf('No of points  =%12d\n',n);
      end
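
    The "load the whole file, then fit" shape the question is after looks roughly like the following. This is a Python/NumPy sketch of the same program rather than the MATLAB answer itself; the filename is a placeholder, and the least-squares formulas are the ones already used above.

      import numpy as np

      # Load all (x, y) pairs at once instead of reading them record by record.
      data = np.loadtxt("xy_pairs.txt")      # hypothetical file: one "x y" pair per line
      x, y = data[:, 0], data[:, 1]
      n = len(x)

      # Same least-squares formulas as in the fscanf version of the program.
      slope = (np.sum(x * y) - np.sum(x) * y.mean()) / (np.sum(x ** 2) - np.sum(x) * x.mean())
      intercept = y.mean() - slope * x.mean()

      print('Regression coefficients for the least-squares line:')
      print(f'Slope (m)     = {slope:12.3f}')
      print(f'Intercept (b) = {intercept:12.3f}')
      print(f'No of points  = {n:12d}')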

    Read the article

  • Git. Checkout feature branch between merge commits

    - by mageslayer
    Hi all. It's kind of weird, but I can't perform a pretty common operation with Git. Basically what I want is to check out a feature branch, not at its head but at a specific SHA. This SHA points between merges from the master branch. The problem is that all I get is just the master branch, without the commits from the feature branch. Currently I'm trying to fix a regression introduced earlier in the master branch. Just to be more descriptive, I crafted a small bash script to recreate a problem repository:

      #!/bin/bash
      rm -rf ./.git
      git init
      echo "test1" > test1.txt
      git add test1.txt
      git commit -m "test1" -a
      git checkout -b patches master
      echo "test2" > test2.txt
      git add test2.txt
      git commit -m "test2" -a
      git checkout master
      echo "test3" > test3.txt
      git add test3.txt
      git commit -m "test3" -a
      echo "test4" > test4.txt
      git add test4.txt
      git commit -m "test4" -a
      echo "test5" > test5.txt
      git add test5.txt
      git commit -m "test5" -a
      git checkout patches
      git merge master
      # Now how to get a branch having all commits from patches + test3.txt + test4.txt - test5.txt ???

    Basically all I want is to check out branch "patches" with files 1-4, but not including test5.txt. Doing git checkout [sha_where_test4.txt_entered] just gives a branch with test1, test3 and test4, but excluding test2.txt. Thanks.

    Read the article

  • how do i implement / build / create an 'in memory database' for my unit test

    - by Michel
    Hi all, I started unit testing a while ago, and as it turned out I was doing more regression testing than unit testing, because I also included my database layer and so hit the database every time. So I implemented Unity to inject a fake database layer, but of course I still want to store some data, and the main opinion was: "create an in-memory database". But what is that, and how do I implement it? The main question is: I think I have to fake the database layer, but doesn't that mean I end up creating a 'simple database' myself? Or put differently: how can I keep it simple and not rebuild SQL Server just for my unit tests :) At the end of this question I'll give an explanation of the situation I got into on the project I just started on, and I was wondering if this was the way to go. Michel

    The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together. For the real database we're using the Entity Framework, and that works very simply. In the 'fake' layer, though, I have to create all kinds of classes to load, save, persist etc. the data. It seems weird that there is so much work in the fake layer and so little in the real layer. I hope this all makes sense :)
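
    The idea is usually simpler than it sounds: behind the same repository interface, swap the real database for something that lives entirely in memory for the duration of a test. Here is a Python sketch using SQLite's built-in :memory: mode; the class and table names are invented, and in .NET the equivalent would be an in-memory SQLite database or a simple fake repository behind the injected interface.

      import sqlite3
      import unittest

      class CustomerRepository:
          """Hypothetical data layer; production code would be handed a real connection."""
          def __init__(self, conn):
              self.conn = conn

          def add(self, name):
              self.conn.execute("INSERT INTO customers(name) VALUES (?)", (name,))

          def count(self):
              return self.conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

      class CustomerRepositoryTests(unittest.TestCase):
          def setUp(self):
              # Every test gets a fresh database that exists only in RAM.
              self.conn = sqlite3.connect(":memory:")
              self.conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
              self.repo = CustomerRepository(self.conn)

          def tearDown(self):
              self.conn.close()

          def test_add_persists_a_row(self):
              self.repo.add("Michel")
              self.assertEqual(self.repo.count(), 1)

      if __name__ == "__main__":
          unittest.main()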

    Read the article

  • Research and replace Word Rtf

    - by Perello
    I'm working on an application which has a workflow for postal mail. These letters are generated according to my application's business rules. The templates are in HTML or RTF, and it works perfectly as long as the user does not create the RTF with Word. Word support is not within the specs, but my management would welcome Word compatibility if it doesn't involve too much work, and it would please our customer and make their life easier. The RTF templates contain tags which are replaced by application values. In most RTF files the tags are not split, so the search and replace works perfectly. I wish to handle Word with as few modifications as possible. Example data: [[FooBuzz]]. In most RTF it is not split. In Word 2003 it becomes:

      {\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 [[}{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid2708730 FooBuzz}{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 ]]}

    Word 2007 also splits the tag, leaving Foo{garbage inside}Buzz. So I wish to keep handling common RTF perfectly and also detect tags even when they are split. I have two constraints: first, no regressions; second, it has to stay simple. Performance is not an issue here. I'm using symfony 1.4. The relevant search code is currently:

      $regExpression = '/\[\[([^\[\]]*)\]\]/';
      preg_match_all($regExpression, $sTemplate, $outKeys);
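
    One way to approach the split-tag case, sketched in Python rather than the question's PHP/Symfony code: keep the existing span rule, then strip the interleaved RTF control words and braces from the captured text to recover the logical tag name. This is a simplified illustration; a real replacement pass would also have to keep the RTF group braces balanced when writing the value back.

      import re

      # A Word-mangled fragment: the tag [[FooBuzz]] is split across several RTF runs.
      rtf = (r"{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 [[}"
             r"{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid2708730 FooBuzz}"
             r"{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 ]]}")

      TAG_SPAN = re.compile(r"\[\[([^\[\]]*)\]\]")       # same span rule as the PHP regex
      RTF_NOISE = re.compile(r"\\[a-z]+-?\d*\s?|[{}]")   # control words and group braces

      def tag_names(document):
          """Return the logical tag names, even when Word has split them across runs."""
          names = []
          for match in TAG_SPAN.finditer(document):
              # Strip the interleaved RTF markup to recover the original tag text.
              names.append(RTF_NOISE.sub("", match.group(1)).strip())
          return names

      print(tag_names(rtf))             # ['FooBuzz']
      print(tag_names("[[FooBuzz]]"))   # plain, unsplit RTF still works: ['FooBuzz']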

    Read the article

  • How do you tell that your unit tests are correct?

    - by Jacob Adams
    I've only done minor unit testing at various points in my career. Whenever I start diving into it again, it always troubles me how to prove that my tests are correct. How can I tell that there isn't a bug in my unit test? Usually I end up running the app, proving it works, then using the unit test as a sort of regression test. What is the recommended approach and/or what is the approach you take to this problem? Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing. Edit2: For the arguments "unit testing is for making sure your changes don't break anything" and "this will only happen if the test has the exact same flaw as the code", what if the test overfits? It's possible to pass both good and bad code with a bad test. My main question is what good is unit testing since if your tests can be flawed you can't really improve your confidence in your code, can't really prove your refactoring worked, and can't really prove that you met the specification?
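
    The "overfitting test" worry in Edit2 is easiest to see with a concrete case: a test that merely pins the current output will happily bless a bug, while a test written from the specification will not. A small illustrative Python sketch (the function and numbers are invented; the second assertion fails on purpose):

      def apply_discount(price, percent):
          # Buggy implementation: divides by 10 instead of 100.
          return price - price * percent / 10

      # An "overfitted" test, written by running the code and copying its answer back in.
      # It encodes the bug rather than the specification, so it passes.
      assert apply_discount(200, 10) == 0

      # A test written from the specification ("10% off 200 is 180") fails and exposes
      # the bug - which is the kind of confidence the question is really about.
      assert apply_discount(200, 10) == 180, "10% off 200 should be 180"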

    Read the article

  • How to generate a monotone MART ROC in R?

    - by user1521587
    I am using R and applying MART (the algorithm for multiple additive regression trees) to a training set to build prediction models. When I look at the ROC curve, it is not monotone. I would be grateful if someone could help me with how to fix this. I am guessing the issue is that MART initially generates n trees, and if these trees are not the same for all the models I am building, the results will not be comparable. Here are the steps I take:

    1) Fix the false-negative cost c_fn. Let cost = c(0, 1, c_fn, 0).

    2) Build the MART model with the following line:

      mart(x, y, lx, martmode='class', niter=2000, cost.mtx=cost)

    where x is the matrix of training-set variables, y is the observation matrix, and lx is the matrix which specifies which of the variables in x are numerical and which are categorical.

    3) Predict the test-set observations using the MART model from step 2:

      y_pred = martpred(x_test, probs=T)

    4) Compute the false-positive and true-positive rates as follows:

      t = 1/(1+c_fn)   # threshold based on the Bayes optimal rule, with c_fp=1 and c_fn as above
      p_0  = length(which(y_test==1))/dim(y_test)[1]
      p_01 = sum(1*(y_pred[,2] > t & y_test==0))/dim(y_test)[1]
      p_11 = sum(1*(y_pred[,2] > t & y_test==1))/dim(y_test)[1]
      p_fp = p_01/(1-p_0)
      p_tp = p_11/p_0

    5) Repeat steps 1-4 for a new false-negative cost.
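
    A likely reason for the zig-zag is that every point on the curve comes from a separately re-fitted model (one per cost matrix), so the points do not lie on a single classifier's threshold sweep. If one model is fitted once and only the threshold t is varied over its fixed scores, the resulting ROC is monotone by construction. A Python/scikit-learn sketch of that idea, with gradient boosting standing in for MART on synthetic data:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import roc_curve
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=3000, weights=[0.8, 0.2], random_state=1)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

      # Fit ONE boosted-tree model (stand-in for MART) and keep its scores fixed.
      model = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
      scores = model.predict_proba(X_te)[:, 1]

      # Sweep the threshold over the fixed scores: fpr and tpr can only grow,
      # so the ROC traced this way cannot zig-zag the way per-cost refits can.
      fpr, tpr, thresholds = roc_curve(y_te, scores)
      assert np.all(np.diff(fpr) >= 0) and np.all(np.diff(tpr) >= 0)
      print(list(zip(fpr[:5].round(3), tpr[:5].round(3))))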

    Read the article

  • is there a signal emiter/consumer engine (like in Django) for .NET (C#)

    - by user118657
    Does .NET (C#) have anything like Django's signals engine? Our business logic has become really complicated over a few years of adding new features, and I'm going to re-architect it. Currently all the features are so coupled that changing something in one place causes regression errors: some other place may break. I really like Django's apps idea, where separate applications introduce new functionality and are completely separate, and communication between apps is implemented through signals. I wonder whether there is something in .NET that allows dividing the project's business logic into many separate "apps" (plug-ins, zones, modules, you name it) and handling communication through some kind of "signals". For example, we have a simple order flow. We can add a "coupon" app that, if present in the project, adds the ability to use discount coupons. We can add a "cross-sale" module that, if present, adds the ability to offer cross-sale products. An email notification module, if present, adds the ability to send order email notifications. At the same time all these modules are self-contained: communication between them is done by emitting signals (ORDER_PROCESS_START, ORDER_SUCCESS, etc.), and other modules can subscribe to these signals and process them as required. This architecture is not related to the web; all the business logic is processed on the server side, without working with HTTP directly. Is this a good architecture from a code-maintenance and testing point of view, and is it possible to do this in .NET? Are there any drawbacks that I'm not seeing now?
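
    The dispatching pattern behind Django signals is tiny and maps naturally onto .NET events/delegates or an event-aggregator/mediator library built on them. A language-neutral sketch in Python of the emitter/consumer idea (the signal names are taken from the question; the handlers and order-flow function are invented for illustration):

      from collections import defaultdict

      class Signals:
          """Minimal emitter/consumer registry in the spirit of Django signals."""
          def __init__(self):
              self._subscribers = defaultdict(list)

          def connect(self, signal_name, handler):
              self._subscribers[signal_name].append(handler)

          def emit(self, signal_name, **payload):
              for handler in self._subscribers[signal_name]:
                  handler(**payload)

      signals = Signals()

      # Each optional "app" only registers handlers; the order flow knows nothing about them.
      signals.connect("ORDER_SUCCESS", lambda order_id, **_: print(f"coupon app: settle coupon for order {order_id}"))
      signals.connect("ORDER_SUCCESS", lambda order_id, email, **_: print(f"email app: notify {email}"))

      def process_order(order_id, email):
          # ... core order-processing logic ...
          signals.emit("ORDER_SUCCESS", order_id=order_id, email=email)

      process_order(42, "customer@example.com")

    Dropping a module then just means not registering its handlers; the core flow stays unchanged.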

    Read the article

  • Is there any way to configure what reCAPTCHA is actually displaying?

    - by trejder
    Is there any way to control what kind of image is displayed to the user in reCAPTCHA, or what kind of puzzle he or she is required to solve? I have noticed at least two significant changes in what reCAPTCHA serves (and I must admit that I don't much like these changes). For years reCAPTCHA served two words from scanned books, and the user was required to solve one of them. They were clearly readable (even the "second" ones, which could be omitted), and a human had nearly no problem solving them. For the past few months I have noticed a significant change on all of my sites that use reCAPTCHA. They started to show a combination of a computer-generated string of digits and something that looks to me like a street/house number photographed in Google Street View. These are even easier to solve, but, most importantly, it started to happen more and more often that the user is obliged to solve both of them. Now I have noticed another change/regression. Some of my sites remain at this so-called "level 2" (as above), and some of them have started to serve two words again ("level 1"?). And again, there are more and more situations where solving both words is required. But, most importantly, at this "level" the words are nearly impossible to solve (on my old mobile devices with a 3.5'' display I need 5-6 attempts to get through!). They're cluttered, written in some strange font, mostly in italics, with a lot of black and white stains or drops on the letters, etc. Plus, reCAPTCHA has stopped being consistent: some of my pages are still serving "level 2" while some of them are "killing" end users with the need to solve "level 3". Is there any way I can control this and force it to use only "level 2" on all my pages? (Of course, I'm using exactly the same piece of code to serve reCAPTCHA on all my pages.) Note that I'm not asking for something like in this question. I don't want to change what reCAPTCHA shows (to disable words in favour of only numbers, for example). I only want to control which "version" of the puzzles (among those described above) reCAPTCHA shows, and I want it to be the same on all my sites.

    Read the article

  • Breaking the Outlook 2010 e-mail blue quote line for inline responses

    - by Jez
    This has to be the most infuriating regression from Outlook 2003 to 2007. It also exists the same in Outlook 2010, as far as I can tell. When you reply to an HTML e-mail message in Outlook, the quoted text has a blue line down the side, and is usually at the bottom of the message: Now in Outlook 2003, when replying to HTML-formatted messages in Outlook, you used to be able to reply inline quite easily, by getting to the point in the quoted message you wanted to reply to, and pressing the 'decrease indent' button: Since Outlook 2007 (and 2010), they replaced the e-mail editor with Microsoft Word. This means the blue line is implemented in a different way; it uses a blue left border. This makes it tougher to break the line up. After much ado, I found a couple of pages that said that you could remove all formatting by pressing ctrl-Q, which would remove the blue line next to the cursor and allow inline replies: OK, not too bad on the face of it. I can live with that. But here's the kick in the teeth; try sending that mail. I'll send it to myself. What do I receive? This: Outlook 2010 reinstated the blue line, where I had removed it, upon my sending the e-mail! For God's sake! The two pages I linked to above don't seem to address Outlook's reinstating of the blue line upon sending. So, does anyone know how you can actually reply inline in Outlook 2010 (or Outlook 2007) e-mail without the blue line being reinstated? Before anyone says, I do not want to convert the message to plaintext, and I do not want to just indent replies and have to manually build the blue line myself. I want something like the Outlook 2003 behaviour; I reply, Outlook creates the blue line, and I can break it up with inline replies, send it, and my inline formatting stays. My hopes aren't high - Microsoft seem to have gone to some trouble to actively prevent inline replies here, for some reason - but I'd appreciate anyone's insights. Cheers!

    Read the article

  • umount bind of stale NFS

    - by Paul Eisner
    I've got a problem removing mounts created with mount -o bind from a locally mounted NFS folder. Assume the following mount structure:

    NFS mounted directory:

      $ mount -o rw,soft,tcp,intr,timeo=10,retrans=2,retry=1 \
          10.20.0.1:/srv/source /srv/nfs-source

    Bound directory:

      $ mount -o bind /srv/nfs-source/sub1 /srv/bind-target/sub1

    Which results in this mount map:

      $ mount
      /dev/sda1 on / type ext3 (rw,errors=remount-ro)
      # ...
      10.20.0.1:/srv/source on /srv/nfs-source type nfs (rw,soft,tcp,intr,timeo=10,retrans=2,retry=1,addr=10.20.0.100)
      /srv/nfs-source/sub1 on /srv/bind-target/sub1 type none (rw,bind)

    If the server (10.20.0.1) goes down (e.g. ifdown eth0), the handles become stale, which is expected. I can now unmount the NFS mount with force:

      $ umount -f /srv/nfs-source

    This takes a few seconds, but works without any problems. However, I cannot unmount the bound directory /srv/bind-target/sub1. The forced umount results in:

      $ umount -f /srv/bind-target/sub1
      umount2: Stale NFS file handle
      umount: /srv/bind-target/sub1: Stale NFS file handle
      umount2: Stale NFS file handle

    Here is a trace: http://pastebin.com/ipvvrVmB

    I've tried unmounting the sub-directories beforehand and finding any processes accessing anything within the NFS or bind mounts (there are none). lsof also complains:

      $ lsof -n
      lsof: WARNING: can't stat() nfs file system /srv/nfs-source
            Output information may be incomplete.
      lsof: WARNING: can't stat() nfs file system /srv/bind-target/sub1 (deleted)
            Output information may be incomplete.
      lsof: WARNING: can't stat() nfs file system /srv/bind-target/
            Output information may be incomplete.

    I've tried this with the recent stable Linux kernels 3.2.17, 3.2.19 and 3.3.8 (I cannot use 3.4.x because I need the grsecurity patch, which does not yet support it; grsecurity is not patched in in the tests above!). My nfs-utils are version 1.2.2 (Debian stable). Does anybody have an idea how I can either: force the unmount some other way (any dirty trick is welcome; data loss or damage is negligible at this point), or use something else instead of mount -o bind? (I cannot use soft links because the mounted directories will be used in a chroot; bindfs via FUSE is far too slow to be an option.) Thanks, Paul

    Update 1: With 2.6.32.59 the umount of the (stale) sub-mounts works just fine. It seems to be a kernel regression bug. The above tests were with NFSv3; additional tests with NFSv4 showed no change.

    Update 2: We have now tested multiple 2.6 and 3.x kernels and are sure that this was introduced in 3.0.x. We will file a bug report; hopefully they figure it out.

    Read the article

  • Why does unpartitioned Hitachi HDS5C3020 drive start consuming 50% more power 15 minutes after boot?

    - by Pro Backup
    In a Debian 6.0.6 system there are 74 2TB Toshiba DT01ABA200 drives. These drives identify themselves as Hitachi HDS5C3020BLE630 drives running firmware revision MZ4OAAB0. 64 drives are attached via HP SAS expander cards to an LSI 2008 SAS controller, another 5 drives are connected directly to the mainboard, 4 drives are connected to a Sil-based PCI controller, and the last drive is only powered and has no data cable connected. The onboard BIOS of both the LSI and the Sil card is disabled, and the mpt2sas and sata_sil modules are removed from the Linux debian 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux kernel. The mpt2sas module is loaded after boot using a modprobe command in /etc/rc.local. These 74 drives are not partitioned, not formatted and not mounted. The system consumes:

    with 0 drives: 70.6 - 70.9 Watt (also 15 minutes after boot);
    with 74 drives: 330 - 360 Watt just after boot (equivalent to 3.5 - 3.9 W per drive in the idle state);
    with 74 drives: 420 - 466 Watt, each time in the 15th minute of uptime (equivalent to 4.7 - 5.3 W per drive in the idle state).

    The drive specification lists 4.7 W for read/write and 3.3 W for idle power consumption. The increased power consumption is most likely on the 5 V line, because after roughly 1 minute the over-current protection (OCP) of the power supply (PSU) shuts down the power. The PSU used is a single-rail model with an OCP of 122 A on the 12 V line and 55 A on the 5 V line. Regression: it doesn't matter whether the drives' APM value is set to disabled or to 1 (maximum power saving). The operating system records no read/write activity in /proc/diskstats; the values there are identical (28 read, 0 write operations) to those immediately after the modprobe operation. I can't test what happens when booting into the mainboard's BIOS - to exclude any OS intervention - because the Super Micro X8SI6-F mainboard running firmware 06/27/12 has a bug that incorrectly reads a +74.0 C CPU sensor temperature as "High" in BIOS mode and shuts down the power after 1 minute. What might be causing the drive read/write activity on all drives in the 15th minute after boot, and how can I prevent it from happening?

    Read the article

  • How to Collect Debug Info for Oracle SQL Developer

    - by thatjeffsmith
    In a perfect world, there would be no software bugs. Developers would always test their code. QA would find any scenarios and bugs the developers hadn't already thought of. Regression tests would be complete and flawless. But alas, we can only afford to pay mere humans here, so we will have bugs from time to time. Or sometimes you are trying to do something the software wasn't designed for, or perhaps your machine has exhausted its resources trying to build the un-buildable. When you run into problems, you will need help. Developers need your help so they can help you. Surprisingly enough, feedback like this isn't very helpful: "Your program isn't working. How can I make it work?" When you are ready to work with us on the SQL Developer OTN forum, you will most likely be asked to run SQL Developer and capture the output from the command console. In case you need help with this, here's a step-by-step process you can follow in Windows 7 (it should work in XP too).

    Open a Windows command window: Start - Run - CMD. Once it's open, click on the window icon and select 'Defaults.' Change the default buffer size to be something bigger, much bigger.
    Set the CMD window default buffer size HIGHER
    Note: you only need to do this once.

    Navigate to your SQL Developer installation folder. Instead of running the 'sqldeveloper.exe' file in the root directory, we are going to go several sub-directories down. Find the 'bin' sub-directory and run the 'sqldeveloper.exe' there. When you do this, a CMD window will open, and then you'll see the SQL Developer application load.
    The SQL Developer bin directory - run the tool from here and get a logging window

    Use SQL Developer as normal, until it 'breaks' or 'hangs.' Now you are ready to grab the nitty-gritty information that MIGHT tell the developer what is going wrong or happening in your scenario. Click back into the CMD window and send a Ctrl+Break or a Ctrl+Pause. If you are on a newer laptop that doesn't have this key, be sure to check the 'Fn' subset of keys. If you need to map the BREAK or PAUSE buttons, this article might help. You can also try the on-screen keyboard in Windows - just type 'OSK' in your START - RUN prompt. Copy the logging information from the command window - all of it.
    We need this information, help us get it!

    Open a case with Oracle Support or start a thread on the forums. Or email me. If you're on my blog reading this, it's the least I can do to help. Now, before you hit 'Send' or 'Post' or 'Submit' - be sure to add a brief description of what you were doing in the application when you ran into the problem. Even if you were doing 'nothing,' let us know how many connections you had open, what windows were active, etc. The more you can tell us, the higher your odds go up of getting a quick fix or at least an answer as to what is happening. Also include the following information: the version of SQL Developer you are running, the version of the JDK you are using, the OS you are using, and the version of Oracle you are connected to. Now, don't be surprised if you get asked to upgrade to a supported configuration, say 'version 3.1 and the 1.6 JDK.' Supporting older versions of software is fun, and while we enjoy a challenge, it may be easier for you to upgrade your way out of the problem at hand.

    Read the article

  • MVVM Light V4 preview 2 (BL0015) #mvvmlight

    - by Laurent Bugnion
    Over the past few weeks, I have worked hard on a few new features for MVVM Light V4. Here is a second early preview (consider this pre-alpha if you wish). The features are unit-tested, but I am now looking for feedback and there might be bugs! Bug correction: Messenger.CleanupList is now thread safe This was an annoying bug that is now corrected: In some circumstances, an exception could be thrown when the Messenger’s recipients list was cleaned up (i.e. the “dead” instances were removed). The method is called now and then and the exception was thrown apparently at random. In fact it was really a multi-threading issue, which is now corrected. Bug correction: AllowPartiallyTrustedCallers prevents EventToCommand to work This is a particularly annoying regression bug that was introduced in BL0014. In order to allow MVVM Light to work in XBAPs too, I added the AllowPartiallyTrustedCallers attribute to the assemblies. However, we just found out that this causes issues when using EventToCommand. In order to allow EventToCommand to continue working, I reverted to the previous state by removing the AllowPartiallyTrustedCallers attribute for now. I will work with my friends at Microsoft to try and find a solution. Stay tuned. Bug correction: XML documentation file is now generated in Release configuration The XML documentation file was not generated for the Release configuration. This was a simple flag in the project file that I had forgotten to set. This is corrected now. Applying EventToCommand to non-FrameworkElements This feature has been requested in order to be able to execute a command when a Storyboard is completed. I implemented this, but unfortunately found out that EventToCommand can only be added to Storyboards in Silverlight 3 and Silverlight 4, but not in WPF or in Windows Phone 7. This obviously limits the usefulness of this change, but I decided to publish it anyway, because it is pretty damn useful in Silverlight… Why not in WPF? In WPF, Storyboards added to a resource dictionary are frozen. This is a feature of WPF which allows to optimize certain objects for performance: By freezing them, it is a contract where we say “this object will not be modified anymore, so do your perf optimization on them without worrying too much”. Unfortunately, adding a Trigger (such as EventTrigger) to an object in resources does not work if this object is frozen… and unfortunately, there is no way to tell WPF not to freeze the Storyboard in the resources… so there is no way around that (at least none I can see. In Silverlight, objects are not frozen, so an EventTrigger can be added without problems. Why not in WP7? In Windows Phone 7, there is a totally different issue: Adding a Trigger can only be done to a FrameworkElement, which Storyboard is not. Here I think that we might see a change in a future version of the framework, so maybe this small trick will work in the future. Workaround? Since you cannot use the EventToCommand on a Storyboard in WPF and in WP7, the workaround is pretty obvious: Handle the Completed event in the code behind, and call the Command from there on the ViewModel. This object can be obtained by casting the DataContext to the ViewModel type. This means that the View needs to know about the ViewModel, but I never had issues with that anyway. New class: NotifyPropertyChanged Sometimes when you implement a model object (for example Customer), you would like to have it implement INotifyPropertyChanged, but without having all the frills of a ViewModelBase. 
A new class named NotifyPropertyChanged allows you to do that. This class is a simple implementation of INotifyPropertyChaned (with all the overloads of RaisePropertyChanged that were implemented in BL0014). In fact, ViewModelBase inherits NotifyPropertyChanged. ViewModelBase does not implement IDisposable anymore The IDisposable interface and the Dispose method had been marked obsolete in the ViewModelBase class already in V3. Now they have been removed. Note: By this, I do not mean that IDisposable is a bad interface, or that it shouldn’t be used on viewmodels. In the contrary, I know that this interface is very useful in certain circumstances. However, I think that having it by default on every instance of ViewModelBase was sending a wrong message. This interface has a strong meaning in .NET: After Dispose has been executed, the instance should not be used anymore, and should be ready for garbage collection. What I really wanted to have on ViewModelBase was rather a simple cleanup method, something that can be executed now and then during runtime. This is fulfilled by the ICleanup interface and its Cleanup method. If your ViewModels need IDisposable, you can still use it! You will just have to implement the interface on the class itself, because it is not available on ViewModelBase anymore. What’s next? I have a couple exciting new features implemented already but that need more testing before they go live… Just stay tuned and by MIX11 (12-14 April 2011), we should see at least a major addition to MVVM Light Toolkit, as well as another smaller feature which is pretty cool nonetheless More about this later! Happy Coding Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario; A small team of 3 developers mostly in maintenance mode with traditional ASP.net, classic ASP, .Net integration services and utilities with the company’s third party packages, and a bunch of java-based Coldfusion web applications all under Visual Source Safe (VSS). They are about to embark on a huge SharePoint 2010 new construction project and wanted to use subversion instead VSS. TFS was a foreign word and smelled of “high cost” and of an “over complicated process”. Since they had no preconditions about the old TFS versions (‘05 & ‘08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application management lifecycle product. So, how does a small team begin using TFS? 1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It’s up to you on whether you want a clean start or need quick access to all the version notes and history of the bits. 2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration to the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks. 3. With new construction, begin work with requirements and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don’t fear, this is very intuitive and MSDN has a ton of lesson based labs and videos. 4. For the java developers, use the new Team Explorer Everywhere 2010 plugin (recently known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver. 5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing and naturally, everyone will want workitems to help them organize the chaos! 6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic “bug rate” reports and current backlog listings that can provide good information. Some notable explanations of TFS; Work Item Tracking and Project Management - A workitem represents the unit of work within the system which enables tracking of all activities produced by a user, whether it is a developer, business user, project manager or tester. The properties of a workitem such as linked changesets (checked-in code), who updated the data and when, the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defines as a "bug", "requirement", test case", or a "change request". They drive the work effort by the individual assigned to it and also provide a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal. 
Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as the VS Test Professional 2010 which allows a tester to facilitate manual tests including fast forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers no longer can provide a response to a bug with the line "cannot reproduce". With every test run, attachments including the recorded session, captured environment configurations and settings, screen shots, intellitrace (debugging history), and in some cases if the lab manager is being used, a snapshot of the tested environment is available. Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, Shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done with the code by any developer has become much easier to picture and resolve issues. Team Build - Automate the compilation process whether you need it to be whenever a developer checks-in code, periodically such as nightly builds for testers in the morning, or manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails. Project Portal & Reporting - Provide management with a dashboard with insight into the project(s). "Where are we" in each step of the way including past iterations and the current burndown rate. Enabling this feature is easy as it seamlessly interfaces with existing SharePoint implementations.

    Read the article

  • Using Event Driven Programming in games, when is it beneficial?

    - by Arthur Wulf White
    I am learning ActionScript 3 and I see the Event flow adheres to the W3C recommendations. From what I learned events can only be captured by the dispatcher unless, the listener capturing the event is a DisplayObject on stage and a parent of the object firing the event. You can capture the events in the capture(before) or bubbling(after) phase depending on Listner and Event setup you use. Does this system lend itself well for game programming? When is this system useful? Could you give an example of a case where using events is a lot better than going without them? Are they somehow better for performance in games? Please do not mention events you must use to get a game running, like Event.ENTER_FRAME Or events that are required to get input from the user like, KeyboardEvent.KEY_DOWN and MouseEvent.CLICK. I am asking if there is any use in firing events that have nothing to do with user input, frame rendering and the likes(that are necessary). I am referring to cases where objects are communicating. Is this used to avoid storing a collection of objects that are on the stage? Thanks Here is some code I wrote as an example of event behavior in ActionScript 3, enjoy. package regression { import flash.display.Shape; import flash.display.Sprite; import flash.events.Event; import flash.events.EventDispatcher; import flash.events.KeyboardEvent; import flash.events.MouseEvent; import flash.events.EventPhase; /** * ... * @author ... */ public class Check_event_listening_1 extends Sprite { public const EVENT_DANCE : String = "dance"; public const EVENT_PLAY : String = "play"; public const EVENT_YELL : String = "yell"; private var baby : Shape = new Shape(); private var mom : Sprite = new Sprite(); private var stranger : EventDispatcher = new EventDispatcher(); public function Check_event_listening_1() { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); } private function init(e:Event = null):void { trace("test begun"); addChild(mom); mom.addChild(baby); stage.addEventListener(EVENT_YELL, onEvent); this.addEventListener(EVENT_YELL, onEvent); mom.addEventListener(EVENT_YELL, onEvent); baby.addEventListener(EVENT_YELL, onEvent); stranger.addEventListener(EVENT_YELL, onEvent); trace("\nTest1 - Stranger yells with no bubbling"); stranger.dispatchEvent(new Event(EVENT_YELL, false)); trace("\nTest2 - Stranger yells with bubbling"); stranger.dispatchEvent(new Event(EVENT_YELL, true)); stage.addEventListener(EVENT_PLAY, onEvent); this.addEventListener(EVENT_PLAY, onEvent); mom.addEventListener(EVENT_PLAY, onEvent); baby.addEventListener(EVENT_PLAY, onEvent); stranger.addEventListener(EVENT_PLAY, onEvent); trace("\nTest3 - baby plays with no bubbling"); baby.dispatchEvent(new Event(EVENT_PLAY, false)); trace("\nTest4 - baby plays with bubbling"); baby.dispatchEvent(new Event(EVENT_PLAY, true)); trace("\nTest5 - baby plays with bubbling but is not a child of mom"); mom.removeChild(baby); baby.dispatchEvent(new Event(EVENT_PLAY, true)); mom.addChild(baby); stage.addEventListener(EVENT_DANCE, onEvent, true); this.addEventListener(EVENT_DANCE, onEvent, true); mom.addEventListener(EVENT_DANCE, onEvent, true); baby.addEventListener(EVENT_DANCE, onEvent); trace("\nTest6 - Mom dances without bubbling - everyone is listening during capture phase(not target and bubble phase)"); mom.dispatchEvent(new Event(EVENT_DANCE, false)); trace("\nTest7 - Mom dances with bubbling - everyone is listening during capture phase(not target and bubble phase)"); mom.dispatchEvent(new Event(EVENT_DANCE, true)); } 
private function onEvent(e : Event):void { trace("Event was captured"); trace("\nTYPE : ", e.type, "\nTARGET : ", objToName(e.target), "\nCURRENT TARGET : ", objToName(e.currentTarget), "\nPHASE : ", phaseToString(e.eventPhase)); } private function phaseToString(phase : int):String { switch(phase) { case EventPhase.AT_TARGET : return "TARGET"; case EventPhase.BUBBLING_PHASE : return "BUBBLING"; case EventPhase.CAPTURING_PHASE : return "CAPTURE"; default: return "UNKNOWN"; } } private function objToName(obj : Object):String { if (obj == stage) return "STAGE"; else if (obj == this) return "MAIN"; else if (obj == mom) return "Mom"; else if (obj == baby) return "Baby"; else if (obj == stranger) return "Stranger"; else return "Unknown" } } } /*result : test begun Test1 - Stranger yells with no bubbling Event was captured TYPE : yell TARGET : Stranger CURRENT TARGET : Stranger PHASE : TARGET Test2 - Stranger yells with bubbling Event was captured TYPE : yell TARGET : Stranger CURRENT TARGET : Stranger PHASE : TARGET Test3 - baby plays with no bubbling Event was captured TYPE : play TARGET : Baby CURRENT TARGET : Baby PHASE : TARGET Test4 - baby plays with bubbling Event was captured TYPE : play TARGET : Baby CURRENT TARGET : Baby PHASE : TARGET Event was captured TYPE : play TARGET : Baby CURRENT TARGET : Mom PHASE : BUBBLING Event was captured TYPE : play TARGET : Baby CURRENT TARGET : MAIN PHASE : BUBBLING Event was captured TYPE : play TARGET : Baby CURRENT TARGET : STAGE PHASE : BUBBLING Test5 - baby plays with bubbling but is not a child of mom Event was captured TYPE : play TARGET : Baby CURRENT TARGET : Baby PHASE : TARGET Test6 - Mom dances without bubbling - everyone is listening during capture phase(not target and bubble phase) Event was captured TYPE : dance TARGET : Mom CURRENT TARGET : STAGE PHASE : CAPTURE Event was captured TYPE : dance TARGET : Mom CURRENT TARGET : MAIN PHASE : CAPTURE Test7 - Mom dances with bubbling - everyone is listening during capture phase(not target and bubble phase) Event was captured TYPE : dance TARGET : Mom CURRENT TARGET : STAGE PHASE : CAPTURE Event was captured TYPE : dance TARGET : Mom CURRENT TARGET : MAIN PHASE : CAPTURE */

    Read the article

  • jQuery with SharePoint solutions

    - by KunaalKapoor
    For me jQuery is the 'Plan-B' for everything.And most of my projects include the use of jQuery for something or the other, so I decided to write a small note on what works best while using jQuery along with SharePoint.I prefer to use the jQuery JavaScript library, which is far more robust, easier to use, and allows for plugins. Follow the steps below to add jQuery to your master page. For office 365, the prefered location to add jQuery files is the "Site Asserts" library.Deployment Best PracticesThey are only as good as the context it’s being referenced.  In other words, take into account your world before applying it.Script your deployment options.  Folder in SPD. Use the file system.  Make external references.  The JQuery library is on the Microsoft Ajax Content Delivery Network. You may even choose to publish to and from the document library. (pros and cons to this approach)Reference options when referencing the script.ScriptLink will make sure it’s loaded at the top of the page and only loaded once. You need Visual Studio or SPDContent Editor Web Part (CEWP).  Drop it on the page and it’s there.  Easy but dangerousCustom Actions. Great for global deployments of JQuery.  Loads it on every page. It also works in Sandbox installations.Deployment Maintenance Dont’sDon’t add scripts directly to your Master Page. That’s way too much effort because the pages are hard to maintain.Don’t add scripts directly to the CEWP.  Use a content link instead. That will allow for reuse. If you or someone deletes the CEWP you won’t lose code in the web partSecurity.  Any scripts run with the same privileges of the current user.  In other words, you can’t get in trouble.Development Best PracticesDon’t abuse the DOM.  There are better options to load the DOM without hitting it 1,000 times.User other performance boosters.Try other libraries.  Try some custom codeAvoid String conversionMinify your filesUse CAML to reduce number of returns rowsOnly update your JQuery library AFTER RIGOROUS REGRESSION TESTINGCRUD operations can come with some funSP Services wraps SharePoint’s web services for executionThe Bing SDK is pretty easy to use.  You can add it to your page with a script,  put it into a content editor web part and connect it from the address parameters in a list.Steps:1. Go to jquery.com and download the latest jQuery library to your desktop. You want to get the compressed production version, not the development version.2. Open SharePoint Designer (SPD) and connect to the root level of your site's site collection.In SPD, open the "Style Library" folder. Create a folder named "Scripts" inside of the Style Library. Drag the jQuery library JavaScript file from your desktop into the Scripts folder.In the Scripts folder, create a new JavaScript file and name it (e.g. "actions.js").3. If you are using visual studio add a folder for js, you can create a new folder at the root level or if you prefer more cleaner solutions like me, you can use the layouts folder which cleans out on deactivation/uninstall.4. 
Within the <head> tag of the master page, add a script reference to the jQuery library just above the content place holder named "PlaceHolderAdditonalPageHead" (and above your custom CSS references, if applicable) as follows:<script src="/Style%20Library/Scripts/{jquery library file}.js" type="text/javascript"></script>Immediately after the jQuery library reference add a script reference to your custom scripts file as follows:<script src="/Style%20Library/Scripts/actions.js" type="text/javascript"></script>Inside your script tag, you can test if jQuery is already defined and if not, then add it to the page.<script type='text/javascript'>  if (typeof jQuery == 'undefined')    document.write('<scr'+'ipt type="text/javascript" src="http://code.jquery.com/jquery-1.6.1.min.js"></sc'+'ript>');</script>For the inquisitive few... Read on if you'd like :)Why jQuery on SharePoiny is AwesomeIt’s all about that visual wow factor.  You can get past that, “But it looks like SharePoint”  Take a long list view and put it into JQuery with pagination, etc and you are the hero.  It’s also about new controls you get with JQuery that you couldn’t do before.Why jQuery with SharePoint should be AwfulAlthough it’s fairly easy to get jQuery up and running. Copy/Paste can cause a problem.  If you don’t understand what it’s doing in the Client Object Model and the Document Object Model then it will do things on your site that were completely unexpected. Many blogs will note workarounds they employed on their sites. Why it’s not working: Debugging “sucks”.You need to develop small blocks of functionality, Test it by putting in some alerts  and console.log. Set breakpoints and monitor the DOM via Firebug and some IE development toolsPerformance - It happens all the time. But you should look at the tradeoffs. More time may give you more functionality.Consistency - ”But it works fine on my computer. So test on many browsers.  Take into account client resourcesHarm the Farm -  You need to code wisely and negatively test.  Don’t be the cause of a DoS attack that’s really JQuery asking for a resource over and over and over again.  So code wisely. Do negative testing. Monitor Server Resources.They also did a demo where JQuery did an endless loop to pull data from a list. It’s a poor decision but also an easy mistake.  They spiked their server resources within a couple seconds and had to shut down the call before it brought it down.ConclusionJQuery is now another tool in your tool kit. You don’t have to use it. Use it where it makes sense and where it helps you get your job done.Don’t abuse it, you will pay for it laterIt will add to page bloat so take that into accountIt can slow your performance

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time. We’re probably all familiar with the concept of “Done Done” that represents what *actually* being done a feature means. Not just when a developer says he’s done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoD’s for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoD’s. Here’s some examples: Code Reviews StyleCop FxCop User Manuals Updated Architecture Docs Updated Tested by QA Tested by UAT Installers Created Support Knowledge Base Updated Deployment Instructions (for Ops) written Automated Unit Tests Run Automated Integration Tests Run Then we start by arranging these activities into the place they occur today (e.g. Do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team, is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle, and Story cycle immediately, and that becomes our starting DoD’s. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating User Manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle s they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time. 
Sure there’s other aspects to maturity outside of this, but it’s a great technique, that’s easy to visualize, to drive agility into the team. Just keep moving those activities (aka “gates”) down the board from Release->Sprint->Story. I’ll try to give an example of what a recent client of mine had for their DoD’s (this is from memory, so probably not 100% accurate): Release Create/Update deployment Instructions For Ops Instructional Videos Updated Run manual regression test suite UAT Testing In this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn’t break anything outside our system Sprint Deploy to UAT Environment But not necessarily actually request UAT testing occur User Guides updated Sprint Features Video Created In this case we decided to create a video each sprint showing off the progress (video version of Sprint Demo) User Story Manual Test scripts developed and run Tested by BA Deployed in shared QA environment Using automated deployment process Peer Code Review Code Check-In Compiled (warning-free) Passes StyleCop Passes FxCop Create installer packages Run Automated Tests Run Automated Integration Tests PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn’t a DoD on a sprint meaningless – it’s done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true – the sprint is done regardless when the end date rolls around – if the DoD activities haven’t been completed I would consider the Sprint a failure (similar to not completing what was committed/planned – failure may be too strong a word but you get the idea). In the Retrospective that will become an agenda item to discuss and understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • RuntimeBinderException with dynamic in C# 4.0

    - by Terence Lewis
    I have an interface:

      public abstract class Authorizer<T> where T : RequiresAuthorization
      {
          public AuthorizationStatus Authorize(T record)
          {
              // Perform authorization specific stuff
              // and then hand off to an abstract method to handle T-specific stuff
              // that should happen when authorization is successful
          }
      }

    Then I have a bunch of different classes which all implement RequiresAuthorization and, correspondingly, an Authorizer<T> for each of them (each business object in my domain requires different logic to execute once the record has been authorized). I'm also using a UnityContainer, in which I register various Authorizer<T>'s. I then have some code as follows to find the right record in the database and authorize it:

      void Authorize(RequiresAuthorization item)
      {
          var dbItem = ChildContainer.Resolve<IAuthorizationRepository>()
                           .RetrieveRequiresAuthorizationById(item.Id);
          var authorizerType = type.GetType(String.Format("Foo.Authorizer`1[[{0}]], Foo",
                           dbItem.GetType().AssemblyQualifiedName));
          dynamic authorizer = ChildContainer.Resolve(type) as dynamic;
          authorizer.Authorize(dbItem);
      }

    Basically, I'm using the Id on the object to retrieve it out of the database. In the background NHibernate takes care of figuring out what type of RequiresAuthorization it is. I then want to find the right Authorizer for it (I don't know at compile time what implementation of Authorizer<T> I need, so there is a little bit of reflection to get the fully qualified type). To accomplish this, I use the non-generic overload of UnityContainer's Resolve method to look up the correct authorizer from configuration. Finally, I want to call Authorize on the authorizer, passing through the object I got back from NHibernate. Now, for the problem: in Beta 2 of VS2010 the above code works perfectly. On the RC and RTM, as soon as I make the Authorize() call, I get a RuntimeBinderException saying "The best overloaded method match for 'Foo.Authorizer<Bar>.Authorize(Bar)' has some invalid arguments". When I inspect the authorizer in the debugger, it's the correct type. When I call GetType().GetMethods() on it, I can see the Authorize method which takes a Bar. If I do GetType() on dbItem it is a Bar. Because this worked in Beta 2 and not in the RC, I assumed it was a regression (it seems like it should work) and I delayed sorting it out until I'd had a chance to test it on the RTM version of C# 4.0. Now I've done that and the problem still persists. Does anybody have any suggestions to make this work? Thanks, Terence

    Read the article

  • Git repo planning questions

    - by masonk
    At work, development uses perforce to handle code sharing. I won't say "revision control", because we aren't allowed to check in changes until they are ready for regression testing. In order to get my personal change sets under revision control, I've been given the go-ahead to build my own git and initialize the client view of the perforce depot as a git repo. There are some difficulties in doing this, however. The client view lives in a subfolder of ~, (~/p4), and I want to put ~ under revision control as well, with its own separate history. I can't figure out how to keep the history for ~ separate from ~/p4 without using a submodule. The problem with a submodule is that it looks like I have to go make a repository that will become the submodule and then git submodule add <repo> <path>. But there is nowhere to make the submodule's repository except in ~. There seems to be no safe place to create the initial client view of the depot with git p4 clone. (I'm working off of the assumption that initing or cloning a repo into a subdirectory of a git repo is not supported. At least, I can find nothing authoritative on nested git repos.) edit: Is merely ignoring ~/p4 in the repo rooted at ~ enough to allow me to init a nested repo in ~/p4? My __git_ps1 function still thinks I'm in a git repository when I visit an ignored subdirectory of a git repo, so I'm inclined to think not. I need the "remote" repository created by git p4 sync to be a branch in ~/p4. We are required to keep all of our code in ~/p4 so that it doesn't get backed up. Can I pull from a "remote" branch that is really a local branch? This one is just for convenience, but I thought I could learn something by asking it. For 99% of the project, I just want to start the with the p4 head revision as the inital commit object. For the other 1%, I would like to suck down the entire p4 history so that I can browse it in git. IOW, after I'm done initalizing it, the initial commit of remotes/p4/master branch will contain: revision 1 of //depot/prod/Foo/Bar/* revision X of other files in //depot/prod/*, where X is the head revision and the remotes/p4/master branch contains Y commits, where Y is the number of changelists that had a file in //depot/prod/Foo/Bar/*, with each commit in the history corresponding to one of those p4 changelists, and HEAD looking like p4's head.

    Read the article

  • Override variables while testing a standalone Perl script

    - by BrianH
    There is a Perl script in our environment that I now need to maintain. It is full of bad practices, including using (and re-using) global variables throughout the script. Before I start making changes to the script, I was going to try to write some test scripts so I can have a good regression base. To do this, I was going to use a method described on this page. I was starting by writing tests for a single subroutine. I put this line somewhat near the top of the script I am testing: return 1 if ( caller() ); That way, in my test script, I can require 'script_to_test.pl'; and it won't execute the whole script. The first subroutine I was going to test makes a lot of use of global variables that are set throughout the script. My thought was to try to override these variables in my test script, something like this: require_ok('script_to_test.pl'); $var_from_other_script = 'Override Value'; ok( sub_from_other_script() ); Unfortunately (for me), the script I am testing has a massive "my" block at the top, where it declares all variables used in the script. This prevents my test script from seeing/changing the variables in the script I'm running tests against. I've played with Exporter, Test::Mock..., and some other modules, but it looks like if I want to be able to change any variables I am going to have to modify the other script in some fashion. My goal is to not change the other script, but to get some good tests running so when I do start changing the other script, I can make sure I didn't break anything. The script is about 10,000 lines (3,000 of them in the main block), so I'm afraid that if I start changing things, I will affect other parts of the code, so having a good test suite would help. Is this possible? Can a calling script modify variables in another script declared with "my"? And please don't jump in with answers like, "Just re-write the script from scratch", etc. That may be the best solution, but it doesn't answer my question, and we don't have the time/resources for a re-write.

    Read the article

  • Delphi LoadLibrary Failing to find DLL other directory - any good options?

    - by Chris Thornton
    Two Delphi programs need to load foo.dll, which contains some code that injects a client-auth certificate into a SOAP request. foo.dll resides in c:\fooapp\foo.dll and is normally loaded by c:\fooapp\foo.exe. That works fine. The other program needs the same functionality, but it resides in c:\program files\unwantedstepchild\sadapp.exe. Both apps load the DLL with this code: FOOLib := LoadLibrary('foo.dll'); ... If FOOLib <> 0 then begin FOOProc := GetProcAddress(FOOLib, 'xInjectCert'); FOOProc(myHttpRequest, Data, CertName); end; It works great for foo.exe, as the DLL is right there. sadapp.exe fails to load the library, so FOOLib is 0, and the rest never gets called. The sadapp.exe program therefore silently fails to inject the cert, and when we test against production the cert is missing, so the connection fails. Obviously, we should have fully qualified the path to the DLL. Without going into a lot of detail, there were aspects of the testing that masked this problem until recently, and now it's basically too late to fix in code, as that would require a full regression test, and there isn't time for that. Since we've painted ourselves into a corner, I need to know if there are any options that I've overlooked. While we can't change the code (for this release), we CAN tweak the installer. I've found that placing c:\fooapp into the path works, as does adding a second copy of foo.dll directly into c:\program files\unwantedstepchild. c:\fooapp\foo.exe will always be running while sadapp.exe is running, so I was hoping that Windows would find the DLL that way, but apparently not. Is there a way to tell Windows that I really want that same DLL? Maybe a manifest or something? This is the sort of "magic bullet" that I'm looking for. I know I can: modify the Windows path, probably in the installer (that's ugly); add a second copy of the DLL directly into the unwantedstepchild folder (also ugly); or delay the project while we code and test a proper fix (unacceptable). Other? Thanks for any guidance, especially with "Other". I understand that this issue is not necessarily specific to Delphi. Thanks!

    Read the article

  • Hide public method used to help test a .NET assembly

    - by ChrisW
    I have a .NET assembly, to be released. Its release build includes: A public, documented API of methods which people are supposed to use A public but undocumented API of other methods, which exist only in order to help test the assembly, and which people are not supposed to use The assembly to be released is a custom control, not an application. To regression-test it, I run it in a testing framework/application, which uses (in addition to the public/documented API) some advanced/undocumented methods which are exported from the control. For the public methods which I don't want people to use, I excluded them from the documentation using the <exclude> tag (supported by the Sandcastle Help File Builder), and the [EditorBrowsable] attribute, for example like this: /// <summary> /// Gets a <see cref="IEditorTransaction"/> instance, which helps /// to combine several DOM edits into a single transaction, which /// can be undone and redone as if they were a single, atomic operation. /// </summary> /// <returns>A <see cref="IEditorTransaction"/> instance.</returns> IEditorTransaction createEditorTransaction(); /// <exclude/> [EditorBrowsable(EditorBrowsableState.Never)] void debugDumpBlocks(TextWriter output); This successfully removes the method from the API documentation, and from Intellisense. However, if in a sample application program I right-click on an instance of the interface to see its definition in the metadata, I can still see the method, and the [EditorBrowsable] attribute as well, for example: // Summary: // Gets a ModelText.ModelDom.Nodes.IEditorTransaction instance, which helps // to combine several DOM edits into a single transaction, which can be undone // and redone as if they were a single, atomic operation. // // Returns: // A ModelText.ModelDom.Nodes.IEditorTransaction instance. IEditorTransaction createEditorTransaction(); // [EditorBrowsable(EditorBrowsableState.Never)] void debugDumpBlocks(TextWriter output); Questions: Is there a way to hide a public method, even from the meta data? If not then instead, for this scenario, would you recommend making the methods internal and using the InternalsVisibleTo attribute? Or would you recommend some other way, and if so what and why? Thank you.
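    On the second question, here is a minimal sketch of the internal-plus-InternalsVisibleTo route. The ModelText.ModelDom.Nodes namespace, IEditorTransaction, and the two method names come from the question; the test-assembly name and the extra debug interface are placeholders. The test-only member moves off the public interface onto an internal one, so it never shows up in the metadata a consumer browses, while the named test assembly can still reach it.

        using System.IO;
        using System.Runtime.CompilerServices;

        // Grant the regression-test assembly access to internals.
        // (A strong-named assembly would also need ", PublicKey=..." appended.)
        [assembly: InternalsVisibleTo("ModelText.RegressionTests")]

        namespace ModelText.ModelDom.Nodes
        {
            // Public, documented surface only.
            public interface IEditorTransaction
            {
                void Commit();
                void Rollback();
            }

            public interface IEditorDom
            {
                IEditorTransaction createEditorTransaction();
            }

            // Test-only surface: invisible outside this assembly, except to the
            // assemblies named in InternalsVisibleTo above.
            internal interface IEditorDomDebug
            {
                void debugDumpBlocks(TextWriter output);
            }

            // The control implements both; consumers only ever see IEditorDom.
            public class EditorControl : IEditorDom, IEditorDomDebug
            {
                public IEditorTransaction createEditorTransaction()
                {
                    return new EditorTransaction();
                }

                // Explicit implementation, so the method is not a public member
                // of EditorControl and does not appear in its public metadata.
                void IEditorDomDebug.debugDumpBlocks(TextWriter output)
                {
                    output.WriteLine("block dump goes here");
                }

                private class EditorTransaction : IEditorTransaction
                {
                    public void Commit() { }
                    public void Rollback() { }
                }
            }
        }

    The regression-test project, compiled under the name given to InternalsVisibleTo, can cast the control to IEditorDomDebug and call debugDumpBlocks, while the public metadata contains only the documented members and no [EditorBrowsable] workaround is needed.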

    Read the article
