Search Results

Search found 25284 results on 1012 pages for 'test driven'.

Page 688/1012 | < Previous Page | 684 685 686 687 688 689 690 691 692 693 694 695  | Next Page >

  • AS3 Pass FlashVars to loaded swf

    - by Robin
    Hi, I have an A.swf which loads B.swf onto a movieclip and needs to pass it some FlashVars. When loading B.swf from HTML, I can pass FlashVars fine. When passing from A.swf, I get an Error #2044: Unhandled ioError:. text=Error #2032: Stream Error. URL: file: The code in A.swf is var request:URLRequest = new URLRequest("B.swf"); var variables:URLVariables = new URLVariables(); variables.xml = "test.xml"; // This line causes the error 2044; without it B.swf loads fine request.data = variables; loader.load(request); In B.swf, the FlashVars are read like so (this works fine from the HTML side): this.loaderInfo.parameters.xml
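
    One workaround that often comes up for this (a sketch, untested against this exact setup) is to skip request.data entirely and pass the variables on the query string, since B.swf reads loaderInfo.parameters either way:

        // hypothetical: same A.swf, but the FlashVars ride on the URL itself
        var request:URLRequest = new URLRequest("B.swf?xml=" + encodeURIComponent("test.xml"));
        loader.load(request); // B.swf still reads this.loaderInfo.parameters.xml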

    Read the article

  • Text Decoding Problem

    - by Jason Miesionczek
    So given this input string: =?ISO-8859-1?Q?TEST=2C_This_Is_A_Test_of_Some_Encoding=AE?= And this function: private string DecodeSubject(string input) { StringBuilder sb = new StringBuilder(); MatchCollection matches = Regex.Matches(input, @"=\?(?<encoding>[\S]+)\?.\?(?<data>[\S]+[=]*)\?="); foreach (Match m in matches) { string encoding = m.Groups["encoding"].Value; string data = m.Groups["data"].Value; Encoding enc = Encoding.GetEncoding(encoding.ToLower()); if (enc == Encoding.UTF8) { byte[] d = Convert.FromBase64String(data); sb.Append(Encoding.ASCII.GetString(d)); } else { byte[] bytes = Encoding.Default.GetBytes(data); string decoded = enc.GetString(bytes); sb.Append(decoded); } } return sb.ToString(); } The result is the same as the data extracted from the input string. What am I doing wrong that this text is not getting decoded properly?
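
    Worth noting: the ?Q? token in the header marks RFC 2047 quoted-printable ("Q") encoding, so the =XX hex escapes and underscores need decoding before the charset conversion; the code above only ever base64-decodes. A minimal sketch of the Q-decoding step (assuming a ?Q? header; a ?B? token would mean Base64 instead):

        // needs System, System.Collections.Generic, System.Text
        private static string DecodeQEncoded(string data, Encoding enc)
        {
            var bytes = new List<byte>();
            for (int i = 0; i < data.Length; i++)
            {
                if (data[i] == '=' && i + 2 < data.Length)
                {
                    bytes.Add(Convert.ToByte(data.Substring(i + 1, 2), 16)); // =XX is a hex-encoded byte
                    i += 2;
                }
                else if (data[i] == '_')
                {
                    bytes.Add((byte)' '); // underscore stands for a space in Q encoding
                }
                else
                {
                    bytes.Add((byte)data[i]);
                }
            }
            return enc.GetString(bytes.ToArray()); // e.g. =AE becomes the (R) sign under ISO-8859-1
        }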

    Read the article

  • Prototype Library use of !! operator

    - by Rajat
    Here is a snippet from the Prototype JavaScript library: Browser: (function(){ var ua = navigator.userAgent; var isOpera = Object.prototype.toString.call(window.opera) == '[object Opera]'; return { IE: !!window.attachEvent && !isOpera, Opera: isOpera, WebKit: ua.indexOf('AppleWebKit/') > -1, Gecko: ua.indexOf('Gecko') > -1 && ua.indexOf('KHTML') === -1, MobileSafari: /Apple.*Mobile/.test(ua) } })(), This is all good and I understand the objective of creating a browser object. One thing that caught my eye and that I haven't been able to figure out is the use of the double not operator !! in the IE property. If you read through the code you will find it in many other places. I don't understand what the difference is between !!window.attachEvent and just window.attachEvent. Is it just a convention or is there more to it that's not obvious?
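
    For what it's worth, !! is plain double negation: the first ! coerces any value to a Boolean and negates it, the second flips it back, so the result is always a clean true/false rather than the original truthy/falsy value. A quick illustration:

        var fn = window.attachEvent;      // a function in old IE, undefined elsewhere
        console.log(typeof fn);           // "function" or "undefined"
        console.log(typeof !!fn);         // always "boolean"
        console.log(!!undefined, !!null); // false false
        console.log(!!{}, !!"x");         // true true

    So IE: !!window.attachEvent && !isOpera yields true or false instead of leaking the attachEvent function object itself.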

    Read the article

  • XML Output is Truncated in SQL

    - by Muhammad Akhtar
    Hi, I need to return my result set as XML, and this works fine, but if the number of records increases, my XML output is truncated. Here is my query: select t.id,t.name,t.address from test t FOR XML AUTO, ROOT('Response'), ELEMENTS I have already tried raising the output limits, e.g. Tools --> Options --> Query Results --> SQL Server --> Results to Text --> Maximum number of characters displayed in each column and Tools --> Options --> Results --> Maximum characters per column, but I am still unable to get my desired result. Please suggest a solution. Thanks.
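
    The truncation is usually a Management Studio display limit, not anything in the FOR XML query itself. One common workaround (a sketch, assuming SQL Server 2005 or later) is to route the result through the xml type, which the grid view renders as a clickable, untruncated document:

        DECLARE @result xml;
        SET @result = (SELECT t.id, t.name, t.address
                       FROM test t
                       FOR XML AUTO, ROOT('Response'), ELEMENTS, TYPE);
        SELECT @result; -- with Results to Grid, click the cell to open the full XML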

    Read the article

  • Choosing an HTTP status code for an unknown command reply

    - by w0rldart
    So, I'm writing a small test that I have been required to complete, and I just want to give it some final touches by adding some HTTP status code responses and some other things. Right now, my dilemma is which HTTP status code to choose for my "Unknown command" response after $_GET['cmd'] has been compared against the list of existing commands. case 404: $text = 'Not Found'; break; case 405: $text = 'Method Not Allowed'; break; case 406: $text = 'Not Acceptable'; break; Which one of the above should I go for? And if none of them fits, which other?
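
    Arguably none of the three: 404 says the URL itself doesn't exist, 405 is about the HTTP method (GET/POST/...), and 406 is about content negotiation. For a syntactically valid request naming a command the server doesn't know, 400 Bad Request is the usual fallback. A sketch in the same style as the switch above:

        case 400: $text = 'Bad Request'; break;
        // ...
        header('HTTP/1.1 400 Bad Request');
        echo 'Unknown command: ' . htmlspecialchars($_GET['cmd']);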

    Read the article

  • XML + XSLT -> XML with PHP

    - by rokdd
    Hi, I know there are really a mass of XML/XSLT merging threads on SO, but I could not find anything covering my PHP-specific problem: $xml = new DOMDocument; $xml->load("f.xml"); $xsl = new DOMDocument; $xsl->load('test.xsl'); // init and configure processor $proc = new XSLTProcessor; $proc->importStyleSheet($xsl); // import xsl document $xml2 = $proc->transformToXML($xml); echo $xml2; My XSLT file looks a bit empty. I tried <xsl:output method="xml"/>, but it doesn't help: PHP always returns the data as text or HTML, never as XML. What am I doing wrong? I only want to edit the XML with XSLT and save it back to an XML file. Thanks for your help!
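
    One thing to check: transformToXML() returns a plain string, so echoing it to a browser gets it rendered as markup no matter what <xsl:output method="xml"/> says. To write the result back to an XML file, a sketch along these lines (output filename assumed) should do:

        $result = $proc->transformToXML($xml);
        file_put_contents('out.xml', $result);
        // or keep the result as a DOM tree instead of a string:
        $doc = $proc->transformToDoc($xml);
        $doc->save('out.xml');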

    Read the article

  • Hot deploy no longer working on JBoss

    - by Bernhard V
    Hi! I've got a pretty annoying problem with my JBoss AS 4.2.3 GA. Until recently everything was running fine, but now the hot deploy feature is no longer working. And -- as always -- I don't know what I did to cause this behaviour. My projects are built with Maven. I've cleaned every target directory, installed the projects and then deployed them to the server, so the sources in Eclipse and the deployed projects on the server should be identical. Inside a method I've added a simple System.out.println("test"); statement and -- BANG! -- I get the following error: Do you know a way out of my trouble?

    Read the article

  • Flash AS3 - Display an error if the XML is incorrect

    - by ongoingworlds
    Hi, I'm creating a Flash application which loads in some XML that is generated dynamically by the CMS. I want to display an error in case the XML file isn't formatted correctly. When I test this with incorrectly formatted XML, it just gets to the line myXML = XML(myLoader.data); and then bombs out. How can I catch the error, display a message to the user, and let the Flash program continue as normal? var myXMLURL:URLRequest = new URLRequest(XMLfile); var myLoader:URLLoader = new URLLoader(myXMLURL); myLoader.addEventListener(Event.COMPLETE, xmlLoaded); myLoader.addEventListener(IOErrorEvent.IO_ERROR, xmlFailed); var myXML:XML; //--when the xml is loaded, do this function xmlLoaded(e:Event):void { myXML = XML(myLoader.data); trace("XML = "+myXML); } //--if the xml fails to load, do this function xmlFailed(event:IOErrorEvent):void { errorMsg.text = "The XML file cannot be found" }
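
    Casting malformed markup with XML() throws a runtime TypeError, and an ordinary try/catch around the cast lets the rest of the program carry on. A sketch (error text assumed):

        function xmlLoaded(e:Event):void {
            try {
                myXML = XML(myLoader.data);
                trace("XML = " + myXML);
            } catch (err:TypeError) {
                errorMsg.text = "The XML file is not well formed";
            }
        }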

    Read the article

  • How to convert culture specific double using TypeConverter?

    - by Christian
    Hi, I have a problem with the TypeConverter class. It works fine with culture-invariant values but cannot convert culture-specific formats such as English thousands separators. Below is a small test program that I cannot get to work. using System; using System.Globalization; using System.ComponentModel; namespace TestConvertCulture { class Program { static void Main() { try { var culture = new CultureInfo( "en" ); TypeConverter typeConverter = TypeDescriptor.GetConverter( typeof ( double ) ); double value = (double)typeConverter.ConvertFromString( null, culture, "2,999.95" ); Console.WriteLine( "Value: " + value ); } catch( Exception e ) { Console.WriteLine( "Error: " + e.Message ); } } } }
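
    If memory serves, the stock double converter parses with NumberStyles.Float, which does not include AllowThousands, so the group separator fails regardless of culture. A sketch of a workaround that bypasses the converter (same en culture as above):

        using System;
        using System.Globalization;

        var culture = new CultureInfo("en");
        // NumberStyles.Any includes AllowThousands, so "2,999.95" parses
        double value = double.Parse("2,999.95", NumberStyles.Any, culture);
        Console.WriteLine(value); // 2999.95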

    Read the article

  • .NET: Preserving some, but not all query params during redirect

    - by kasper pedersen
    Hi all, Could someone tell me if the code below would achieve what I want, which is: Check if the query parameters 'return_path' and/or 'user_state' are present in the query string, and if so append them to the query string of the redirect URI. As I'm not a .NET dev and don't have a server to test this on, I was hoping someone could give me some feedback. ArrayList vars = new ArrayList(); vars.Add("return_path"); vars.Add("user_state"); string newUrl = "/new/request/uri" + "?"; ArrayList params = new ArrayList(); foreach ( string key in Request.QueryString ) { if (vars.contains(key)) { params.Add(key + "=" + HttpUtility.URLPathEncode(Request.QueryString[key])); } } String[] paramArr = (String[]) params.ToArray( typeof (string) ); String queryString = String.join("&", paramArr); Response.Redirect(newUrl); Thank you :)
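
    Two things stand out, for what it's worth: params is a reserved word in C#, and queryString is built but never appended to the redirect URL. A lightly corrected sketch (untested, same intent, needs System.Collections.Generic and System.Web):

        var keep = new List<string> { "return_path", "user_state" };
        var pairs = new List<string>();
        foreach (string key in Request.QueryString)
        {
            if (key != null && keep.Contains(key))
                pairs.Add(key + "=" + HttpUtility.UrlEncode(Request.QueryString[key]));
        }
        string newUrl = "/new/request/uri";
        if (pairs.Count > 0)
            newUrl += "?" + String.Join("&", pairs.ToArray());
        Response.Redirect(newUrl); // now actually carries the preserved parameters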

    Read the article

  • How to hide the outline on a form

    - by justjoe
    I have to design a form with an input inside it. I use a background image on the input so it looks like a button, so every time somebody clicks it, it sends $_POST, which is the behavior I want. The problem is the outline around the form: it shows whenever the input is clicked. It's minor, but it would be great to make the form (or input) lose its outline. I tested this in Firefox 3.6 and Flock; both show the outline behavior I want to avoid.
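
    A sketch of the usual CSS for this: outline:none kills the focus ring, and Firefox needs an extra vendor rule for the inner dotted border it draws on buttons and inputs. (Worth remembering this also removes the keyboard-focus cue.)

        input[type="submit"]:focus {
            outline: none;        /* removes the focus outline in most browsers */
        }
        input[type="submit"]::-moz-focus-inner {
            border: 0;            /* Firefox's extra inner dotted border */
            padding: 0;
        }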

    Read the article

  • Display Img and Div inline - it's not rendered inline

    - by user359372
    In order to follow correct web standards, I've tried to lay out an image and a div inline. To achieve that I've used the display:inline property. But then I ran into the following issue: the image renders from the center line, and the div doesn't respect the height set on it. I've tried the line-height property, but that didn't give any useful results. I've also tried various combinations of setting margin/padding to some values or to auto, replacing the div with a span, and wrapping the img and div in additional divs. I've managed to achieve the desired result by using position:absolute, but that doesn't help in cases where I want centered/relative positioning of the whole component. Any clues, ideas, or troubleshooting hints? Please find the html example below: Test Page Some text that should be displayed in the center/middle of the div 123   Some text that should be displayed in the center/middle of the div
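
    A sketch of the usual fix: display:inline ignores width/height on non-replaced elements like div, while display:inline-block honors them and can be vertically aligned against the image (class name hypothetical):

        img, div.caption {
            display: inline-block;   /* unlike inline, honors width/height */
            vertical-align: middle;  /* lines both boxes up on a shared centerline */
        }
        div.caption {
            height: 100px;
            line-height: 100px;      /* centers single-line text vertically */
        }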

    Read the article

  • ASP.Net MVC 404 errors when route contains an .svc extension

    - by Kragen
    I have an ASP.Net MVC 2 site set up under IIS7 using the integrated pipeline with the following route: routes.MapRoute( "MyRoute", "mycontroller/{name}/{*path}", new { controller = "MyController", action = "Index", path = UrlParameter.Optional } ); There are no other routes above this route, but whenever I try to access it with a path value that has an .svc extension, for example: http://localhost/MyVirtualDirectory/mycontroller/test/somepath.svc ASP.Net returns a 404 error without executing my controller (I have a log message call at the start of the action method). If I change the extension to something benign (like .txt) it works perfectly, so it seems that somewhere along the line ASP.Net is interpreting the request as a standard ASP.Net call to a web service that doesn't exist - this is definitely an ASP.Net 404 response (not an IIS response). What could be causing this, and how do I stop it from happening?
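
    In IIS7 integrated mode the .svc extension is claimed by WCF's handler mapping before routing ever sees the request. If the site hosts no WCF services, removing that handler in web.config is a common fix (handler name assumed; it varies by .NET version):

        <system.webServer>
          <handlers>
            <remove name="svc-Integrated" />
          </handlers>
        </system.webServer>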

    Read the article

  • Execute linux AT Command via PHP

    - by ahmad Rabie
    When I run this code via ssh echo wget http://domain.com/send_me_email.php | at 12:54 it runs correctly and sends me an email at that time. But if I run it from PHP like this exec("echo wget http://domain.com/send_me_email.php | at 12:54"); exec("atq",$arr); print_r($arr); the result is something like this: job 63 at 2011-11-27 12:54 As you can see, the job is created successfully, but I don't receive any email at that time. I tested this line in PHP exec("wget http://domain.com/send_me_email.php"); and it sends me an email, which means I have permission to run exec and wget from PHP. So what is the problem? I can't figure it out. Please help me. Thanks
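
    One likely culprit: the at job inherits the environment of the process that submitted it, and when that process is the web server's PHP rather than your ssh shell, PATH can be much leaner, so the job's shell may not find wget at all. Using an absolute path (location assumed; check with `which wget`) is the usual first thing to try:

        exec("echo /usr/bin/wget -q -O /dev/null http://domain.com/send_me_email.php | at 12:54");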

    Read the article

  • DBLinq not generating where clause

    - by sipwiz
    I'm testing out DBLinq-0.18 and DBLinq from SVN trunk with MySQL and Postgresql. I'm only using a very simple query, but on both databases DBLinq is not generating a Where clause. I have confirmed this by turning on statement logging on Postgresql to check exactly what request DBLinq is sending. My Linq query is: MyDB db = new MyDB(new NpgsqlConnection("Database=database;Host=localhost;User Id=postgres;Password=password")); var customers = from customer in db.Customers where customer.CustomerUserName == "test" select customer; The query works OK, but the SQL generated by DBLinq is of the form: select customerusername, customerpassword .... from public.customers There is no Where clause, which means DBLinq must be pulling the whole table down before running the Linq query. Has anyone had any experience with DBLinq and know what I could be doing wrong?

    Read the article

  • iTunes SDK list albums

    - by Matt Facer
    Hi guys, I'm working on a new test app (just out of curiosity, really) which is an add-on to iTunes. I'm trying fairly basic things at the moment and have managed to control volume, pause, etc. I have a function from some demo code which loops through all the tracks in my main library and gets their album names; I then show the individual album names in my listbox. This is FAR from the best way to do it! Is there a way to query the library to get just the album names? I ultimately want to get to a point where I can have a list of albums (with images); I click on an album name and that loads the associated tracks. Thanks for any help! (I'm using VB.net, btw)

    Read the article

  • gcov and switch statements

    - by Matt
    I'm running gcov over some C code with a switch statement. I've written test cases to cover every possible path through that switch statement, but it still reports a branch in the switch statement as not taken and less than 100% on the "Taken at least once" stat. Here's some sample code to demonstrate: #include "stdio.h" void foo(int i) { switch(i) { case 1:printf("a\n");break; case 2:printf("b\n");break; case 3:printf("c\n");break; default: printf("other\n"); } } int main() { int i; for(i=0;i<4;++i) foo(i); return 0; } I built with "gcc temp.c -fprofile-arcs -ftest-coverage", ran "a", then did "gcov -b -c temp.c". The output indicates eight branches on the switch and one (branch 6) not taken. What are all those branches and how do I get 100% coverage?

    Read the article

  • Is there a reason why a submit button would fail in IE6 using jQuery?

    - by kgrad
    I have a form being submitted to a servlet, and there is a jQuery Datepicker on the page. On some computers using IE6, for some reason the page crashes on submit. I am getting a 404 error, but only sometimes. I have no idea why this would occur. My theory is that somehow it's bypassing the servlet, or it's not loading jQuery properly, or... I am at a loss. What are some reasons why this could occur on some computers? The exact version of IE6 is 6.0.2900.2180, on Windows XP SP2. How can I test with this specific version?

    Read the article

  • Why Is Vertical Monitor Resolution So Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.) But why exactly is this the case? Is it arbitrary or is there something more at work? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable). Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360? The Answer SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process: Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing. First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary, there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher speed internet. For a while there were some other codecs than h.264, but these are slowly being phased out with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this. Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution. So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates, they’d rather use their bandwidth for other stuff. This has been researched quite a long time ago and is the big reason why we went from 720×576 (415kpix) to 1280×720 (922kpix), and then again from 1280×720 to 1920×1080 (2MP). Stuff in between is not a viable optimization target. And again, 1440P is about 3.7MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays, back in the day they used to be multiples of 128. This is something that just grew out of LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’. Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s. However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen. But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9 and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here. 
    There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Agile and code release

    - by ring bearer
    Do you know of any agile process that is designed for code releases? One of the main themes of agile is frequent releases, and each company/client has its own test/approval processes that control code releases; most of the time these slow down the pace of "frequent releases". Currently we have a proprietary, tool-based workflow. A team that needs a code promotion must create a promotion request to one of the final UAT servers. Once this is complete, and once tests are done, certain customers and technical/non-technical managers need to approve, and then it goes into the production deploy stage. Meanwhile there is no sprint planning meeting or anything of that sort. What agile code release process has worked for you?

    Read the article

  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham.  The book is available from Manning.  First off, let me say that I'm a huge fan of Manning as a publisher.  I've found their books to be top-quality, over all.  As a Kindle owner, I also appreciate getting an ebook copy along with the dead tree copy.  I find ebooks to be much more convenient to read, but hard-copies are easier to reference. The book covers, surprisingly enough, working with brownfield applications.  Which is well and good, if that term has meaning to you.  It didn't for me.  Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy.  Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline.  Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked. Brownfield code is the gray (brown?) area between the two and the authors argue, quite effectively, that it is the most likely state for an application to be in.  Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with.  Although I hadn't realized it, most of the code I've worked on has been brownfield.  Sometimes, there's talk of scrapping and starting over.  Sometimes, the team dismisses increased discipline as ivory tower nonsense.  And, sometimes, I've been the ignorant culprit vexing my future self. The book is broken into two major sections, plus an introduction chapter and an appendix.  The first section covers what the authors refer to as "The Ecosystem" which consists of version control, build and integration, testing, metrics, and defect management.  The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base. The ecosystem section is just shy of 140 pages long and brings some real meat to the matter.  The focus on "pain points" immediately sets the tone as problem-solution, rather than academic.  The authors also approach some of the topics from a different angle than some essays I've read on similar topics.  For example, the chapter on automated testing is on just that -- automated testing.  It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better.  The discussion on testing is more focused on the "right" level of testing for existing projects.  Sometimes, an integration test is the best you can do without gutting a section of functional code.  Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works.  This isn't to say the authors encourage sloppy coding.  Far from it.  Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf. The other sections take a similarly real-world, workable approach to the pain points they address.  As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count.  
While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects.  The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged. I have to admit that I started getting bored by the end of the ecosystem section.  No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head.  Also, while I agreed with a lot of the ecosystem ideas, in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor.  It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process.  That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development.  This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be. The code section, though, kept me engaged for its entirety.  Many technical books can be used as reference material from day one.  The authors were clear, however, that this book is not one of these.  The first chapter of the section (chapter seven, over all) addresses object oriented (OO) practices.  I've read any number of definitions, discussions, and treatises on OO.  None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do, from time to time, so I didn't mind. The remainder of the book is really just about how to apply OOP to existing code -- and, just because all your code exists in classes does not mean that it's object oriented.  That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt or that they were wagging their finger at me for my prior sins.  Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas.  That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm.  This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care.  Even if you know, with absolute certainty, that you'll never have to work on an existing code-base, I would recommend this book just for the clarity it provides on OOP. This book goes beyond just theory, or even real-world application.  It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild.  First, the authors address refactoring application layers and internal dependencies.  Then, they take you through those layers from the UI to the data access layer and external dependencies.  Finally, they come full circle to tie it all back to the overall process.  By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure. Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools.  The "Our .NET Toolbox" is something of a neon sign pointing to that latter point.  
They do not beat the reader over the head with anything resembling a "One True Way" mentality.  Even for the most emphatic points, the tone is quite congenial and helpful.  With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book.  Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server.  For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop, as well.

    Read the article

  • Finding coordinates of a point between two points?

    - by Nicros
    Doing some 3D stuff in WPF and I want to use a simpler test to see if everything is working (before moving to curves). The basic question: given two points x1,y1,z1 and x2,y2,z2, I have calculated the distance between them. But how do I find the coordinates of another point (x3,y3,z3) that lies on that line at some distance? I.e. if my line is 100 units long between -50,0,0 and 50,0,0, what are the coordinates of the point 100 * 0.1 along the line? I think this is a simple formula but I haven't found it yet.
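
    This is plain linear interpolation: P = P1 + t * (P2 - P1), where t runs from 0 (at P1) to 1 (at P2). A sketch using WPF's Point3D (System.Windows.Media.Media3D):

        static Point3D Lerp(Point3D a, Point3D b, double t)
        {
            // t = 0.1 gives the point 10% of the way from a to b
            return new Point3D(
                a.X + t * (b.X - a.X),
                a.Y + t * (b.Y - a.Y),
                a.Z + t * (b.Z - a.Z));
        }
        // Lerp(new Point3D(-50, 0, 0), new Point3D(50, 0, 0), 0.1) => (-40, 0, 0)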

    Read the article

  • After I've installed it, NUnit doesn't appear in VS2008

    - by Richard77
    Hello, I downloaded and installed NUnit successfully (or so I was told by the installer). Now, when I start a project, I don't see NUnit in the 'Create Unit Test' prompt as one of the choices I'm given. All I can see are (i) VS Unit Test and (ii) MbUnit v3 as choices. In fact, I downloaded MbUnit today; after installing it, I was able to see it right away in the prompt. What happened to NUnit? I can see it under Start - All Programs, though. Or maybe NUnit doesn't need to appear in the prompt as a choice since it has its own GUI? Thanks for helping

    Read the article

  • How can I create a MethodInfo from an Action delegate

    - by Michael Meadows
    I am trying to develop an NUnit addin that dynamically adds test methods to a suite from an object that contains a list of Action delegates. The problem is that NUnit appears to be leaning heavily on reflection to get the job done. Consequently, it looks like there's no simple way to add my Actions directly to the suite. I must, instead, add MethodInfo objects. This would normally work, but the Action delegates are anonymous, so I would have to build the types and methods to accomplish this. I need to find an easier way to do this, without resorting to using Emit. Does anyone know how to easily create MethodInfo instances from Action delegates?
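
    For what it's worth, every delegate instance already exposes a MethodInfo for its target via the Method property, including compiler-generated anonymous methods, so no Emit should be needed just to obtain one. A sketch:

        using System;
        using System.Reflection;

        Action action = () => Console.WriteLine("hello");
        MethodInfo mi = action.Method;   // MethodInfo of the compiler-generated method
        object target = action.Target;   // closure instance (may be null) for Invoke
        mi.Invoke(target, null);         // prints "hello"

    Whether NUnit then accepts a MethodInfo whose declaring type is a compiler-generated closure class is a separate question, but at least the MethodInfo itself comes for free.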

    Read the article

  • How is this code getting compiled even though it uses a constant that is defined later?

    - by GK
    In the following code, DEFAULT_CACHE_SIZE is declared later, but it is used to assign a value to an instance field before that point, so I was curious how this is possible. public class Test { public String getName() { return this.name; } public int getCacheSize() { return this.cacheSize; } public synchronized void setCacheSize(int size) { this.cacheSize = size; System.out.println("Cache size now " + this.cacheSize); } private final String name = "Reginald"; private int cacheSize = DEFAULT_CACHE_SIZE; private static final int DEFAULT_CACHE_SIZE = 200; }
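
    Two rules make this legal: static finals are assigned during class initialization, before any instance initializer runs, and DEFAULT_CACHE_SIZE is a compile-time constant, so javac inlines the value 200 into the field initializer anyway. A minimal sketch of the same pattern:

        class InitOrder {
            // legal forward reference: LIMIT is a static compile-time constant,
            // initialized (and inlined) before any instance is constructed
            private int size = LIMIT;             // compiles as: private int size = 10;
            private static final int LIMIT = 10;
        }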

    Read the article
