Search Results

Search found 10318 results on 413 pages for 'feature detection'.


  • Problem writing a snippet containing Emacs Lisp code

    - by user388346
    Hi all,

    I've been trying to make use of a cool feature of YASnippet: writing snippets containing embedded Emacs Lisp code. There is a snippet for rst-mode that surrounds the entered text with a row of "=" characters as long as the text itself, such as in

        ====
        Text
        ====

    Based on this snippet, I decided to modify it slightly (with Elisp) so that it comments out the three lines depending on the major mode you are in (I thought that such a snippet would be useful for organizing source code). So I wrote this:

        ${1:`(insert comment-start)`} ${2:$(make-string (string-width text) ?\-)}
        $1 ${2:Text}
        $1 ${2:$(make-string (string-width text) ?\-)}
        $0

    This code works relatively well, except for one problem: the indentation of the three lines gets mixed up depending on the major mode I'm in (e.g., in emacs-lisp-mode, the second and third lines move further to the right than the first line). I think the source of the problem has something to do with what comes after the string "${1:" on the first line. If I add a character there, I have no problem (i.e., all three lines are correctly aligned at the end of the snippet expansion); if I add a single space after this string, though, the misalignment problem persists.

    So my question is: do you know of any way of rewriting this snippet so that this misalignment does not arise? Do you know the source of this behaviour?

    Cheers,
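    A hedged guess at a workaround (an assumption based on how YASnippet re-indents expansions, not something stated in the question): the major mode's indentation rules are applied to every line of the expanded snippet, which is what shifts the lines around. Snippets can opt out per expansion via the expand-env directive, pinning the indent behaviour to 'fixed:

        # -*- mode: snippet -*-
        # name: comment banner
        # key: cb
        # expand-env: ((yas/indent-line 'fixed))
        # --
        ${1:`(insert comment-start)`} ${2:$(make-string (string-width text) ?\-)}
        $1 ${2:Text}
        $1 ${2:$(make-string (string-width text) ?\-)}
        $0

    (The variable is yas/indent-line in older releases and yas-indent-line in later ones.)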

    Read the article

  • Python unittest (using SQLAlchemy) does not write/update database?

    - by Jerry
    Hi,

    I am puzzled why my Python unittest runs perfectly fine without actually updating the database. I can even see the SQL statements from SQLAlchemy, and I can step through the newly created user object's email:

        ...INFO sqlalchemy.engine.base.Engine.0x...954c INSERT INTO users (user_id, user_name, email, ...) VALUES (%(user_id)s, %(user_name)s, %(email)s, ...)
        ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_id': u'4cfdafe3f46544e1b4ad0c7fccdbe24a', 'email': u'[email protected]', ...}
        > .../tests/unit_tests/test_signup.py(127)test_signup_success()
        -> user = user_q.filter_by(user_name='test').first()
        (Pdb) n
        ...INFO sqlalchemy.engine.base.Engine.0x...954c SELECT users.user_id AS users_user_id, ... FROM users WHERE users.user_name = %(user_name_1)s LIMIT 1 OFFSET 0
        ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_name_1': 'test'}
        > .../tests/unit_tests/test_signup.py(128)test_signup_success()
        -> self.assertTrue(isinstance(user, model.User))
        (Pdb) user
        <pweb.models.User object at 0x9c95b0c>
        (Pdb) user.email
        u'[email protected]'

    Yet when I log in to the test database at the same time, I do not see the new record there. Is this some feature of Python/unittest/SQLAlchemy/Pyramid/PostgreSQL that I'm totally unaware of?

    Thanks,
    Jerry
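    A likely explanation (an assumption about the test fixture, not something visible in the log): the INSERT runs inside a transaction that is never committed. The session that issued it can read its own pending row back, which is why the SELECT finds the user, but other connections never see the row, and the transaction is rolled back when the test tears down. A minimal sketch of the distinction, with illustrative names:

        # Hypothetical test excerpt; the session setup is assumed.
        user = model.User(user_name='test', email='test@example.com')
        session.add(user)
        session.flush()    # emits the INSERT, but only this session sees the row
        session.commit()   # only after this do other connections see it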

    Read the article

  • Linq to SQL code generator features

    - by Anders Abel
    I'm very fond of Linq to SQL and the programming model it encourages. I think that in many cases, when you are in control of both the database schema and the code, it is not worth the effort to maintain separate relational and object models for the data. Working with Linq to SQL makes it simple to have type-safe data access from .NET, using partial methods to implement business rules.

    Unfortunately, I do not like the dbml designer, due to its lack of a schema refresh function. So far I have used SqlMetal, but that lacks the customization options of the dbml designer. Because of that, I've started working on a tool which regenerates the whole code file like SqlMetal does, but has the ability to apply the customizations that are available in the dbml designer (and maybe more in the future). The customizations will be described in an XML file which only contains the parts that shouldn't have default values. This should keep the XML file size down, as well as the maintenance burden of it.

    To help me focus on the right features, I would like to know: what would be your favourite feature in a Linq to SQL code generator?

    Read the article

  • Favourite Features of VS 2010

    - by Noldorin
    With the general public release of Visual Studio 2010 Beta 2 today, this latest version has created a lot of hype and interest. Indeed, the opinion I've gauged is that VS 2010 has resolved a great many of the minor flaws left over from previous versions, as well as added some particularly useful new code editor and project development tools (in particular in the Premium/Ultimate versions).

    My question here is: what are your favourite new features in VS 2010 that have really got you excited? Or similarly, what are the flaws of VS 2008 that you are most glad to have resolved? There is a wealth of changes in VS 2010, of course, but these are some of the ones that have interested me most (about which I know!):

    • Integrated support for F# (with multi-targeting for .NET 2.0 - 4.0).
    • Much improved WPF designer; the VS 2008 one was more than a bit buggy at times.
    • Great improvements to the code editor, such as call hierarchy viewing.
    • A decent add-in framework.
    • A greatly expanded testing framework (now capable of database testing, for example) in Premium/Ultimate.
    • Project planning and modelling features in Premium/Ultimate.

    If I could request one point/feature per post, I think that would be best, so we could vote on them individually.

    Read the article

  • Unit test project doesn't recognize the classes it was generated from

    - by DougLeary
    I have a fairly simple file-system website consisting of one aspx page and several classes in separate .cs files. Everything is on my own HD. The web app itself builds and runs fine.

    Out of curiosity, I decided to try out Visual Studio's nifty, easy-to-use unit test feature. So I opened each class file and clicked Create Unit Tests. VS generated a test project containing a set of test classes and some other files. Easy! But when I try to build or run the test project, it throws a series of build errors, one for every class:

        The type or namespace name 'class-name' could not be found (are you missing a using directive or an assembly reference?)

    Somebody asked if my test project has a reference to the original project. Well, no, because the original project is a file-system website: it has no bin folder and no DLL, so there's nothing to reference as far as I can tell. I would think that since VS generated these unit tests, it would generate whatever references it needs, but apparently not.

    Is generating unit tests for file-system web apps an undocumented no-no, or is there a magic trick to getting it to work?

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When latency is high ("when pinging the server takes time"), server roundtrips make the difference. Here I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written.

    I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I currently support MS SQL, but I'd like to change RDBMS (because I use the Express Editions, and in my scenario they are quite limiting from a performance point of view). To make a wise choice, I would like to include this point in "my RDBMS comparison feature matrix", to understand which is the best RDBMS to choose as an alternative to MS SQL.

    So the claim above would make me prefer MySQL to Firebird (for the roundtrip concept, not in general), but can anyone add more information? And where does MS SQL stand? Is anyone able to "rank" the roundtrip performance of the main RDBMSs, or at least of MS SQL, MySQL, PostgreSQL and Firebird? (I am not interested in Oracle, since it is not free, and if I have to change I will change to a free RDBMS. Also, MySQL, as mentioned several times on Stack Overflow, has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird.)

    Additional info: you could answer my question with a simple ranking like MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where roundtrips per RDBMS are compared, that would be great.

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database from on-premise SQL Server 2012 to SQL Azure. Easy, right?

    Methods

    After a fair bit of research, I've come across the following methods:

    • Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.

    • Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from CodePlex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes

    Using either method, the process could be:

    1. Export from on-premise SQL Server 2012 to a local BACPAC.
    2. Upload the BACPAC to blob storage.
    3. Import the BACPAC into SQL Azure via the Hosted DAC.

    Or:

    1. Export from on-premise SQL Server 2012 to a local BACPAC.
    2. Import the BACPAC into SQL Azure via the Client DAC.

    Question

    All of the above seems like a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see: is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on a local database, click "Tasks", then "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI, it must be a single command on the command line somewhere?
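    One avenue worth checking before writing anything custom (an assumption about the installed DAC 3.0 bits, not something confirmed above): DacFx ships a command-line tool, SqlPackage.exe, whose Export and Import actions work with BACPACs directly, e.g.:

        REM A sketch; the install path varies with the DacFx version.
        "%ProgramFiles(x86)%\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" ^
            /Action:Export ^
            /SourceServerName:localhost ^
            /SourceDatabaseName:MyDatabase ^
            /TargetFile:MyDatabase.bacpac

    A matching /Action:Import run can then point /TargetServerName at the SQL Azure server, which would collapse the whole pipeline into two <Exec> tasks.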

    Read the article

  • IDE framework for a dynamic language?

    - by Kevin Reid
    Let's say I have a super-wonderful new programming language, and I want there to be an IDE for it. What IDE platform/framework could I use to get this done efficiently? I mean things like:

    • Collection of files in a project, searching them, tabbed/split editors, etc.: the basics.
    • Syntax highlighting and auto-indent/reformatting.
    • Providing the user interface for code completion: hit tab, get a list (I'll have to implement the necessary partial evaluation myself, since it's a dynamic language). This is the feature I'm most wishing for.
    • A built-in parser framework which is good at recovering from the sort of syntax errors occurring in code that is in the middle of being edited would be helpful.
    • In-editor annotation of syntax/runtime error locations fed back from the language runtime.
    • REPL (interactive evaluator) interaction with the same completion as in the editor.

    This system should be Linux/Mac/Windows cross-platform (in that priority order). Being implemented in Java (or rather, accepting language plugins written in Java) is possibly useful, but anything else is worth a try too.

    Read the article

  • Custom Solr sorting

    - by Tom
    Hello everyone,

    I've been asked to do an evaluation of Solr as an alternative to a commercial search engine. The application now has a very particular way of sorting results, using something called "buckets". I'll try to explain in a bit of detail.

    In the interface there are 2 fields: "what" and "where". Both fields are actually sets of fields (what = category, name, contact info... and where = country, state, region, city...), so the copyField feature of Solr immediately comes to mind.

    Now, based on the field that generated the actual match, the result should end up in a specific bucket. In particular, the first bucket contains all the result documents that have an exact match on the category field, the second bucket all exact matches on name, the third partial matches on category, the fourth partial matches on name, the fifth matches on contact info, etc.

    Then, within each of those first-tier buckets, all results are placed in second-tier buckets depending on what location was matched: city, then region, then province, and so on. To complicate things even more, there is also a third-tier bucket where results are placed according to the value of a ranking field: all documents with the value 1 in the ranking field go in bucket 1, and so on. And finally, results should be randomized within the third-tier bucket...

    On top of this, they obviously want support for facets and paging.

    My apologies for the long mail, but I would greatly appreciate feedback and/or suggestions. I'm aware that this is a very particular problem, but anything that points me in the right direction is helpful.

    Cheers,
    Tom
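    For reference, the copyField idea mentioned above would look something like this in schema.xml (field names are illustrative, taken from the description rather than from a real schema):

        <field name="what"  type="text_general" indexed="true" stored="false" multiValued="true"/>
        <field name="where" type="text_general" indexed="true" stored="false" multiValued="true"/>

        <copyField source="category" dest="what"/>
        <copyField source="name"     dest="what"/>
        <copyField source="contact"  dest="what"/>
        <copyField source="city"     dest="where"/>
        <copyField source="region"   dest="where"/>
        <copyField source="country"  dest="where"/>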

    Read the article

  • PHP: passing a function with parameters as a parameter

    - by Oden
    Hey,

    I'm not sure this isn't a silly question, but I'll ask anyway. If there is an anonymous function, I can pass it as another anonymous function's parameter, provided it has already been stored in a variable. But what about the case where I have stored only one function in a variable and add the second directly as a parameter to it? Can I add parameters to the non-stored function?

    First example (this I understand):

        $func = function($str){
            return $str;
        };
        $func2 = function($str){
            return $str;
        };

        $var = $func($func2('asd'));
        var_dump($var); // prints out string(3) "asd"

    That makes sense to me, but what about the following?

        $func = function($str){
            return $str;
        };

        $var = $func(function($str = "asd"){
            return $str;
        });
        var_dump($var);
        /** This prints out:
            object(Closure)#1 (1) {
              ["parameter"]=>
              array(1) {
                ["$str"]=>
                string(10) ""
              }
            }
            But why? */

    And finally, can someone recommend a book or an article from which I can learn about this lambda/closure feature of PHP?

    Thank you in advance for your answers :)
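    A note on what the second example actually does (a reading of PHP's closure semantics, not from the original thread): the inner function is passed to $func as a value and never invoked, so $func returns the Closure object itself, which is what var_dump() prints. The default value $str = "asd" only matters when the closure is called without arguments. To pass the closure's result instead, call it first:

        <?php
        $func = function($str) {
            return $str;
        };

        $inner = function($str = "asd") {
            return $str;
        };

        $var = $func($inner());   // invoke $inner, pass its return value
        var_dump($var);           // string(3) "asd"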

    Read the article

  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires a separate SQL Server database for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new "database instance" to SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases.

    In the process of creating the prototype database, I tried attaching a copy of the database (a companion .MDF/.LDF pair) to SQL Server using SQL Server Management Studio, and discovered that SSMS expects the database to reside in

        C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF

    Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories, or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this.)

    NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise edition. So "User Instances" are not an option.
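    For what it's worth, the engine itself does not mandate that path; it is just the default location the SSMS attach dialog starts from. A T-SQL sketch of attaching from an arbitrary per-customer folder (paths are illustrative):

        CREATE DATABASE CustomerA
        ON (FILENAME = 'D:\Customers\CustomerA\Prototype.mdf'),
           (FILENAME = 'D:\Customers\CustomerA\Prototype_log.ldf')
        FOR ATTACH;

    The SMO route mentioned above (Server.AttachDatabase) likewise takes explicit file paths, so per-customer directories should work from code as well.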

    Read the article

  • Client-side validation breaks in IE because of PropertyProxyValidator and ScriptManager cooperation.

    - by Eugene
    The specifics of the project: we use Enterprise Library for server-side validation and jQuery for client-side validation. So I have the following simple form, for example:

        <asp:Content ID="_mainContent" ContentPlaceHolderID="MainContent" runat="server">
            <script src="../../../Scripts/jquery-1.3.2.js" type="text/javascript"></script>
            <script src="../../../Scripts/jquery.validate.js" type="text/javascript"></script>
            <script type="text/javascript">
                $(document).ready(function() {
                    $("#aspnetForm").validate({
                        rules: {
                            "<%= _txtProjectName.UniqueID %>": { required: true }
                        }
                    });
                });
            </script>
            <asp:TextBox ID="_txtProjectName" runat="server" CssClass="textBoxWithValidator_long" />
            <entlib:PropertyProxyValidator id="_validatorProjectName" runat="server"
                ControlToValidate="_txtProjectName" PropertyName="ProjectName"
                SourceTypeName="LabManagement.Project.Project" />
            <asp:Button CssClass="cell_InlineElement" ID="_btnSave" runat="server" Text="Save"
                onclick="_btnSave_Click" Width="50px" />
            <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true">
            </asp:ScriptManager>
        </asp:Content>

    The problem: client-side validation worked correctly until I had to implement an AJAX.NET feature, which meant adding the ScriptManager to the page (the last element above). After that, the following situation appeared: in Internet Explorer (7), and only in IE (in Firefox everything works correctly), after clicking the Save button with the ProjectName textbox left empty, the client-side jQuery validation message appears, but (!) the page submits to the server anyway.

    Some notes:

    • If I delete the PropertyProxyValidator from the page, the client-side validation works correctly in IE, but I need it for this project.
    • The problem seems to lie in the WebForm_OnSubmit() function that is inserted into the form once the PropertyProxyValidator is added:

        <form name="aspnetForm" method="post" action="Project.aspx?TransType=NewProject" onsubmit="javascript:return WebForm_OnSubmit();" ...>

    Could anyone help, please?

    Read the article

  • Why is a background thread not waiting for a task to complete?

    - by Haris Hasan
    I am playing with the async/await feature of C#. Things work as expected when I use it with the UI thread, but when I use it on a non-UI thread it doesn't work as expected. Consider the code below:

        private void Click_Button(object sender, RoutedEventArgs e)
        {
            var bg = new BackgroundWorker();
            bg.DoWork += BgDoWork;
            bg.RunWorkerCompleted += BgOnRunWorkerCompleted;
            bg.RunWorkerAsync();
        }

        private void BgOnRunWorkerCompleted(object sender, RunWorkerCompletedEventArgs runWorkerCompletedEventArgs)
        {
        }

        private async void BgDoWork(object sender, DoWorkEventArgs doWorkEventArgs)
        {
            await Method();
        }

        private static async Task Method()
        {
            for (int i = int.MinValue; i < int.MaxValue; i++)
            {
                var http = new HttpClient();
                var tsk = await http.GetAsync("http://www.ebay.com");
            }
        }

    When I execute this code, the background thread doesn't wait for the long-running task in Method to complete. Instead, it executes BgOnRunWorkerCompleted immediately after calling Method. Why is that so? What am I missing here?

    P.S.: I am not interested in alternate or "correct" ways of doing this. I want to know what is actually happening behind the scenes in this case. Why is it not waiting?
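    What's happening (general async/await semantics, not specific to this thread): BgDoWork is an async void method, so at the first await it returns control to its caller. BackgroundWorker then treats the DoWork handler as finished and raises RunWorkerCompleted, while the rest of Method keeps running on thread-pool continuations with nothing observing it. A sketch of one way to make the worker actually wait, blocking the worker thread on the returned Task (for illustration only):

        private void BgDoWork(object sender, DoWorkEventArgs doWorkEventArgs)
        {
            // Not async: block this worker thread until the whole
            // async chain has completed, then let DoWork return.
            Method().GetAwaiter().GetResult();
        }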

    Read the article

  • C89, Mixing Variable Declarations and Code

    - by rutski
    I'm very curious to know exactly why C89 compilers will dump on you when you try to mix variable declarations and code, like this for example:

        rutski@imac:~$ cat test.c
        #include <stdio.h>

        int main(void)
        {
            printf("Hello World!\n");
            int x = 7;
            printf("%d!\n", x);
            return 0;
        }
        rutski@imac:~$ gcc -std=c89 -pedantic test.c
        test.c: In function 'main':
        test.c:7: warning: ISO C90 forbids mixed declarations and code
        rutski@imac:~$

    Yes, you can avoid this sort of thing by staying away from -pedantic. But then your code is no longer standards-compliant. And as anybody capable of answering this post probably already knows, this is not just a theoretical concern: platforms like Microsoft's C compiler enforce this quirk of the standard under any and all circumstances.

    Given how ancient C is, I would imagine that this restriction is due to some historical issue dating back to the extraordinary hardware limitations of the 70's, but I don't know the details. Or am I totally wrong there?
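    For comparison, the C89-clean form of the same program simply hoists the declaration to the top of the block (a declaration may also open any inner compound statement, which is one way to keep it close to its first use):

        #include <stdio.h>

        int main(void)
        {
            int x = 7;  /* all declarations before the first statement */

            printf("Hello World!\n");
            printf("%d!\n", x);
            return 0;
        }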

    Read the article

  • Dynamic loading of shared objects using dlopen()

    - by Andy
    Hi,

    I'm working on a plain X11 app. By default, my app only requires libX11.so and the standard gcc C and math libs. My app also has support for extensions like Xfixes and Xrender and the ALSA sound system, but this feature shall be optional: if Xfixes/Xrender/ALSA is installed on the host system, my app will offer extended functionality; if not, my app will still run, but some functionality will not be available.

    To achieve this behaviour, I'm not linking dynamically against -lXfixes, -lXrender and -lasound. Instead, I'm opening these libraries manually using dlopen().

    Now to my question: what library names should I use when calling dlopen()? I've seen that these differ from distro to distro. For example, on openSUSE 11 they're named the following:

        libXfixes.so
        libXrender.so
        libasound.so

    On Ubuntu, however, the names have a version number attached, like this:

        libXfixes.so.3
        libXrender.so.1
        libasound.so.2

    So trying to open "libXfixes.so" would fail on Ubuntu, although the lib is obviously there; it just has a version number attached. So how should my app handle this? Should I let my app scan /usr/lib/ manually first, to see which libs are there, and then choose an appropriate one? Or does anyone have a better idea?

    Thanks guys,
    Andy
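    One common approach (a sketch, not from the original thread): the unversioned .so symlinks are typically shipped only by the -dev/-devel packages, while the versioned soname (libXfixes.so.3, and so on) is what end-user systems are guaranteed to have, and the soname's major version only changes when the ABI changes. So try the versioned soname you coded against first, and fall back to the bare name:

        #include <dlfcn.h>
        #include <stddef.h>

        /* Try each candidate name in order; return the first handle that opens. */
        static void *dlopen_any(const char *const names[])
        {
            void *handle = NULL;
            for (; *names != NULL && handle == NULL; ++names)
                handle = dlopen(*names, RTLD_NOW);
            return handle;
        }

        /* Usage: */
        static const char *const xfixes_names[] = {
            "libXfixes.so.3", "libXfixes.so", NULL
        };
        /* void *xfixes = dlopen_any(xfixes_names); */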

    Read the article

  • Copying a foreign Subversion repository to keep under dependencies

    - by Jonathan Sternberg
    I want to keep dependencies for my project in our own repository, so that we have consistent libraries for the entire team to work with. For example, I want our project to use the Boost libraries. I've seen this done in the past by putting dependencies under a "vendor" or "dependencies" folder.

    But I still want to be able to update these dependencies. If a new feature appears in a library and we need it, I want to just be able to update that repository within our own repository; I don't want to have to re-copy it and put it under version control again. I'd also like for us to have the ability to make a small change to a dependency, if one is needed, without that ever stopping us from updating the library. In other words, I want the ability to do something like 'svn cp', and then be able to 'svn merge' in the future.

    I just tried this with the Boost trunk, but I'm not able to get any history using 'svn log' on the copy I made. How do I do this? What is usually done for large projects with dependencies?
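    A note on the empty 'svn log' (a reading of Subversion's copy semantics, not from the original thread): 'svn cp' only preserves ancestry when source and destination live in the same repository, and copying from a foreign repository such as Boost's isn't possible, so the content arrives as a plain import with no history attached. Within your own repository, the usual vendor-branch pattern does keep 'svn cp' and 'svn merge' working, along these lines (URLs are illustrative):

        # One-time import of the library into a vendor area of your own repo:
        svn import boost-1.x http://svn.example.com/repo/vendor/boost/current -m "Boost vendor drop"
        # In-repo copy into the project; this copy carries history:
        svn cp http://svn.example.com/repo/vendor/boost/current \
               http://svn.example.com/repo/trunk/dependencies/boost -m "Copy Boost into project"
        # After replacing the vendor area with a newer drop, pull the changes in:
        svn merge http://svn.example.com/repo/vendor/boost/current dependencies/boost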

    Read the article

  • Why is partial specialization of a nested class template allowed, while complete isn't?

    - by drhirsch
        template<int x> struct A
        {
            template<int y> struct B {};
            template<int y, int unused> struct C {};
        };

        template<int x> template<>
        struct A<x>::B<x> {};   // error: enclosing class templates are not explicitly specialized

        template<int x> template<int unused>
        struct A<x>::C<x, unused> {};   // ok

    So why is the explicit specialization of an inner, nested class (or function) not allowed when the outer class isn't specialized too? Strangely enough, I can work around this behaviour simply by adding a dummy template parameter and only partially specializing the inner class. It makes things uglier and more complex, but it works.

    Note: I need this feature for recursive templates of the inner class for a set of the outer class. To complicate things even more, what I really need is a template function rather than the inner class, but partial specialization of functions is disallowed entirely somewhere else in the standard ^^

    Read the article

  • Is there a practical benefit to casting a NULL pointer to an object and calling one of its member functions?

    - by zdawg
    OK, so I know that technically this is undefined behavior, but nonetheless I've seen it more than once in production code. And please correct me if I'm wrong, but I've also heard that some people use this "feature" as a somewhat legitimate substitute for a lacking aspect of the current C++ standard, namely the inability to obtain the address (well, offset really) of a member function. For example, this is out of a popular implementation of a PCRE (Perl-compatible Regular Expression) library:

        #ifndef offsetof
        #define offsetof(p_type, field) ((size_t)&(((p_type *)0)->field))
        #endif

    One can debate whether the exploitation of such a language subtlety in a case like this is valid or not, or even necessary, but I've also seen it used like this:

        struct Result
        {
            void stat()
            {
                if (this) {
                    // do something...
                } else {
                    // do something else...
                }
            }
        };

        // ...somewhere else in the code...
        ((Result*)0)->stat();

    This works just fine! It avoids a null pointer dereference by testing for the existence of this, and it does not try to access class members in the else block. So long as these guards are in place, it's legitimate code, right?

    So the question remains: is there a practical use case where one would benefit from using such a construct? I'm especially concerned about the second case, since the first case is more of a workaround for a language limitation. Or is it?

    PS. Sorry about the C-style casts; unfortunately people still prefer to type less if they can.

    Read the article

  • Keyword highlighting in <textarea> (again)

    - by Halst
    Wait, I know! I know that this "syntax highlighting in a textarea" question was raised like a million times on Stack Overflow! But please listen.

    Offtopic: I'm not a web developer, and technically I'm not a programmer at all. I study mechatronics and deal mostly with control engineering and digital hardware. And I'm so pissed off that whenever I want to share some application (that would be helpful in my field) and embed it into the web, I need to know such a crazy number of technologies: HTML, CSS, JavaScript, Flash, etc. That takes time which I could have spent for the benefit of my own field.

    Right now I'm playing with hardware description languages, and I'm writing some Python libraries to convert one HDL into another. I wanted to embed such a feature on the web: http://xhdl2vhdl.appspot.com/

    I wanted to implement some basic syntax highlighting (keyword highlighting alone would be enough) so that the code is readable. But the whole idea of highlighting something in a textarea is not trivial at all. The other difficulty is that the languages I work with are rare, and there are no out-of-the-box solutions for them. I tried to dig into these solutions, but they are very complicated for me:

        http://www.nicolarizzo.com/gamesroom/experimental/CodeEditor.html
        http://marijn.haverbeke.nl/codemirror/jstest.html

    and there are no clear descriptions of how to use them (for my level of knowledge of web development).

    So, is there a simple solution, just to highlight a bunch of keywords in a textarea, or to perform something equivalent? Thank you.

    Read the article

  • PHP File Upload, using the FILES superglobal array

    - by jnkrois
    Hello everybody, I just have a quick question. Let's say that I'm setting up an upload feature for a project. I'm using PHP 5, and I was wondering: through the $_FILES superglobal array, can I access the "tmp_name" array key, which is the temporary path and name of the file being uploaded, in order to display a preview of the image before moving it to the "uploads" folder on the server?

    What I had in mind is something like this: use jQuery to detect when the "file" input field changes (when the user selects an image file), then place an AJAX call to upload the file, which will move it to a temp folder even before I use "move_uploaded_file". If I can somehow access the "tmp_name" array key, I could probably put it into an "img" tag to display a preview, and later move the file to its final location when the submit button is clicked.

    I'm not asking for any code examples or anything, since I'd be thrilled to accomplish this on my own. I just want to find out if there is access to that array key in any way. Thanks in advance.
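    For reference, a sketch of the key access being asked about (with one caveat: tmp_name is a filesystem path on the server, not a URL, so it cannot go into an img tag directly):

        <?php
        // Handler for the AJAX upload request (field name is illustrative).
        if (isset($_FILES['image'])) {
            $tmp = $_FILES['image']['tmp_name'];   // e.g. /tmp/phpAbC123
            if (is_uploaded_file($tmp)) {
                // The temp dir sits outside the web root, so inline the
                // bytes for the preview; MIME type assumed for illustration.
                $data = base64_encode(file_get_contents($tmp));
                echo '<img src="data:image/jpeg;base64,' . $data . '" alt="preview" />';
            }
        }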

    Read the article

  • Vim: yank and replace -- the same yanked input -- multiple times, and two other questions

    - by Hassan Syed
    Now that I am using Vim for everything I type, rather than just for configuring servers, I want to sort out the following trivialities. I tried to formulate Google search queries, but the results didn't address my questions :D.

    Question one: how do I yank and replace multiple times? Once I have something in the yank history (if that is what it's called) and then highlight and use 'p' in command mode, the replaced text is put at the front of the yank history; therefore subsequent replace operations do not use the text I intended. I imagine this is a useful feature under certain circumstances, but I have no need for it in my workflow.

    Question two: how do I type text without causing the line to ripple forward? I use hard tab stops to align my code in a certain way, e.g.:

        FunctionNameX     ( lala * land );
        FunctionNameProto (             );

    When I figure out what needs to go into the second function, how do I insert it without moving the text along?

    Question three: is there a way of having a uniform yank history across gvim instances on the same machine? I have 1 monitor. Just wondering; at the moment I am using highlight + mouse middle-click.
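    One built-in relevant to question one, offered as an aside: Vim's register 0 always holds the most recent yank and is not clobbered when a visual-mode paste pushes the replaced text into the unnamed register, so pasting from it repeats the same yank:

        " Yank once:
        yiw
        " Replace as many times as needed; register 0 keeps the original yank:
        viw"0p
        viw"0p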

    Read the article

  • How to test method call order with Moq

    - by Finglas
    At the moment I have:

        [Test]
        public void DrawDrawsAllScreensInTheReverseOrderOfTheStack()
        {
            // Arrange.
            var screenMockOne = new Mock<IScreen>();
            var screenMockTwo = new Mock<IScreen>();

            var screens = new List<IScreen>();
            screens.Add(screenMockOne.Object);
            screens.Add(screenMockTwo.Object);

            var stackOfScreensMock = new Mock<IScreenStack>();
            stackOfScreensMock.Setup(s => s.ToArray()).Returns(screens.ToArray());

            var screenManager = new ScreenManager(stackOfScreensMock.Object);

            // Act.
            screenManager.Draw(new Mock<GameTime>().Object);

            // Assert.
            screenMockOne.Verify(smo => smo.Draw(It.IsAny<GameTime>()), Times.Once(),
                "Draw was not called on screen mock one");
            screenMockTwo.Verify(smo => smo.Draw(It.IsAny<GameTime>()), Times.Once(),
                "Draw was not called on screen mock two");
        }

    But the order in which I draw my objects in the production code does not matter: I could draw one first or two first, and the test would still pass. However, it should matter, because the draw order is important. How do you (using Moq) ensure methods are called in a certain order?

    Edit: I got rid of that test. The Draw method has been removed from my unit tests; I'll just have to test manually that it works. The reversing of the order, though, was extracted into a separate test class where it could be tested, so it's not all bad. Thanks for the link about the feature they are looking into; I sure hope it gets added soon. Very handy.
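    For reference, one workaround that needs no special Moq feature (a sketch; it records the call order through setup callbacks, and the expected ordering here is an assumption about the reverse-of-stack rule):

        // Arrange: a shared counter records the order of the Draw calls.
        int callIndex = 0;
        int drawnOne = -1, drawnTwo = -1;

        screenMockOne.Setup(s => s.Draw(It.IsAny<GameTime>()))
                     .Callback(() => drawnOne = callIndex++);
        screenMockTwo.Setup(s => s.Draw(It.IsAny<GameTime>()))
                     .Callback(() => drawnTwo = callIndex++);

        // Act.
        screenManager.Draw(new Mock<GameTime>().Object);

        // Assert: with the stack reversed, screen two must draw before screen one.
        Assert.Less(drawnTwo, drawnOne, "Screens were drawn in the wrong order");

    Newer Moq releases also ship a MockSequence helper for expressing ordered expectations directly.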

    Read the article

  • Xcode "Build and Archive" from command line

    - by Dan Fabulich
    Xcode 3.2 provides an awesome new feature under the Build menu, "Build and Archive", which generates an .ipa file suitable for Ad Hoc distribution. You can also open the Organizer, go to "Archived Applications", and "Submit Application to iTunesConnect".

    Is there a way to use "Build and Archive" from the command line (as part of a build script)? I'd assume that xcodebuild would be involved somehow, but the man page doesn't seem to say anything about this.

    UPDATE: Michael Grinich requested clarification. Here is exactly what you can't do with command-line builds, i.e. the features you can ONLY use via Xcode's Organizer after you "Build and Archive":

    • You can click "Share Application..." to share your IPA with beta testers. As Guillaume points out below, due to some Xcode magic, this IPA file does not require a separately distributed .mobileprovision file that beta testers need to install; that's magical. No command-line script can do it. For example, Arrix's script (submitted May 1) does not meet that requirement.

    • More importantly, after you've beta tested a build, you can click "Submit Application to iTunes Connect" to submit that EXACT same build to Apple, the very binary you tested, without rebuilding it. That's impossible from the command line, because signing the app is part of the build process: you can sign bits for Ad Hoc beta testing OR you can sign them for submission to the App Store, but not both. No IPA built on the command line can be beta tested on phones and then submitted directly to Apple.

    I'd love for someone to come along and prove me wrong: both of these features work great in the Xcode GUI and cannot be replicated from the command line.

    Read the article

  • How to hide the console of batch scripts without losing std err/out streams

    - by cooper.thompson
    My question is similar to "Running a CMD or BAT in silent mode", but with one additional constraint: if you use WshShell.Run in VBScript, you lose access to the standard in/error/out streams of the process, while WshShell.Exec gives you access to the standard streams but doesn't let you hide your windows. How can you have your cake (hide the windows) and eat it too (have direct access to the console streams)?

    I'm currently thinking about a C++ executable which creates a new window station and desktop (see MSDN) and runs a specified script within that new desktop. I'm not yet an expert on window stations and desktops, so this idea may be misguided. It is based loosely on Condor's USE_VISIBLE_DESKTOP feature, which, if disabled, runs Condor jobs in a non-visible desktop. I haven't quite figured out whether this requires elevated privilege. The tradeoff of this approach is that your script can disappear into limbo if it blocks on user input.

    Does anyone have any additional ideas, or feedback on the approach outlined above?

    Edit: The purpose of our script is to set up the user environment, so running it as another user or as a system scheduled task isn't really an option (unless there are clever tricks I don't know about).

    Read the article

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throwaway larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (by the way, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now.

    Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D), so I'll have to be somewhat creative about this.

    I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking that from there I could amalgamate everything into, say, one big project with 30 submodules, which I then push to GitHub.

    What would be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? A cron job? A Git hook?

    I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)
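    For reference, a sketch of the umbrella-repo-with-submodules idea under discussion (host names and paths are hypothetical):

        # One umbrella repository holding every module as a submodule.
        git init everything && cd everything
        git submodule add ssh://homebox/srv/git/module1.git module1
        git submodule add ssh://homebox/srv/git/module2.git module2
        git commit -m "Add all modules as submodules"
        # Push the umbrella to the single private GitHub repo.
        git remote add origin git@github.com:youruser/everything.git
        git push origin master

    Note that the umbrella only records submodule commit pointers; the module histories themselves stay in the per-module repos, so syncing those to GitHub would need a separate push (scripted via cron or a post-receive hook, as speculated above).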

    Read the article
