Search Results

Search found 19480 results on 780 pages for 'do your own homework'.


  • Precompiled headers question

    - by Kotti
    Hello! I am currently reorganizing my project: what was recently a simple application has become a pair of C++ projects - a static library and the actual application. I would like to share one precompiled header between the two projects, but I am having trouble setting up the .pdb file paths. Assume my first project is called Library and builds its .lib file with a corresponding Library.pdb file. The second project is called Application and builds everything into the same folder (the .exe and another Application.pdb file). Right now both projects create their own precompiled header file (Library.pch and Application.pch) from the same actual header file. It works, but it is a waste of build time, and I think there should be a way to share one precompiled header between the two projects. If, in my Application project, I set the Use Precompiled Header (/Yu) option and point it at Library.pch, it does not work, and I get the following error:

        error C2858: command-line option 'program database name "Application.pdb"' inconsistent with precompiled header, which used "Library.pdb"

    So, does anyone know a trick or a way to share one precompiled header between two projects while preserving proper debug information?

    Read the article

  • When using Data Annotations with MVC, Pro and Cons of using an interface vs. a MetadataType

    - by SkippyFire
    If you read this article on Validation with the Data Annotation Validators, it shows that you can use the MetadataType attribute to add validation attributes to properties on partial classes. You use this when working with ORMs like LINQ to SQL, Entity Framework, or SubSonic. You then get the "automagic" client- and server-side validation, and it plays very nicely with MVC. However, a colleague of mine used an interface to accomplish exactly the same result. It looks almost exactly the same and functionally accomplishes the same thing. So instead of doing this:

        [MetadataType(typeof(MovieMetaData))]
        public partial class Movie
        {
        }

        public class MovieMetaData
        {
            [Required]
            public object Title { get; set; }

            [Required]
            [StringLength(5)]
            public object Director { get; set; }

            [DisplayName("Date Released")]
            [Required]
            public object DateReleased { get; set; }
        }

    he did this:

        public partial class Movie : IMovie
        {
        }

        public interface IMovie
        {
            [Required]
            object Title { get; set; }

            [Required]
            [StringLength(5)]
            object Director { get; set; }

            [DisplayName("Date Released")]
            [Required]
            object DateReleased { get; set; }
        }

    So my question is: when does this difference actually matter? My thought is that interfaces tend to be more "reusable", and that making one for just a single class doesn't make much sense. You could also argue that you could design your classes and interfaces in a way that allows you to reuse interfaces across multiple objects, but I feel that is trying to fit your models into something else when they should really stand on their own. What do you think?

    Read the article

  • Build System with Recursive Dependency Aggregation

    - by radman
    Hi, I recently began setting up my own library and projects using a cross-platform build system (one that generates make files, Visual Studio solutions/projects etc. on demand) and I have run into a problem that has likely been solved already. The issue is this: when an application has a dependency that itself has dependencies, the application being linked must link the dependency and also all of its sub-dependencies, and this proceeds recursively. For argument's sake, let's assume we are dealing exclusively with static libraries:

        TopLevelApp.exe
            dependency_A
                dependency_A-1
                dependency_A-2
            dependency_B
                dependency_B-1
                dependency_B-2

    So in this example TopLevelApp will need to link dependency_A, dependency_A-1, dependency_A-2 etc., and the same for B. I think having to remember all of these manually in the target application is pretty suboptimal. There is also the issue of ensuring that the same version of a dependency is used across all targets (assuming that some targets depend on the same things, e.g. Boost). Linking all of the libraries is required and there is no way around it; what I am looking for is a build system that manages this for you, so that all you have to do is state that you depend on something and the appropriate dependencies of that library are pulled in automatically. The build system I have been looking at is premake4, which doesn't handle this (as far as I can determine). Does anyone know of a build system that does handle this? And if there isn't one, why not?
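
    The reasoning above boils down to computing the transitive closure of the dependency graph and emitting it in link order. A minimal, language-agnostic sketch of that flattening step (written in Python; the graph literal simply mirrors the hypothetical example from the question and is not tied to any particular build tool):

        # Direct dependencies only; the build script is expected to flatten them.
        deps = {
            "TopLevelApp":  ["dependency_A", "dependency_B"],
            "dependency_A": ["dependency_A-1", "dependency_A-2"],
            "dependency_B": ["dependency_B-1", "dependency_B-2"],
        }

        def link_closure(target, graph):
            """Return the de-duplicated link list for target, dependents first."""
            seen, order = set(), []
            def visit(node):
                for child in graph.get(node, []):
                    if child not in seen:
                        seen.add(child)
                        order.append(child)   # a library appears before the libraries it needs
                        visit(child)
            visit(target)
            return order

        print(link_closure("TopLevelApp", deps))
        # ['dependency_A', 'dependency_A-1', 'dependency_A-2',
        #  'dependency_B', 'dependency_B-1', 'dependency_B-2']

    Build systems that track usage requirements per target (for example CMake's target_link_libraries with PUBLIC/PRIVATE scopes) do essentially this bookkeeping for you.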

    Read the article

  • Reading from serial port with Boost Asio?

    - by trikri
    Hi! I'm going to check for incoming messages (data packages) on the serial port, using Boost Asio. Each message will start with a header that is one byte long and specifies which type of message has been sent. Each type of message has its own length. The function I'm about to write should check for new incoming messages continually, and when it finds one it should read it, and then some other function should parse it. I thought the code might look something like this:

        void check_for_incoming_messages()
        {
            boost::asio::streambuf response;
            boost::system::error_code error;
            std::string s1, s2;

            if (boost::asio::read(port, response, boost::asio::transfer_at_least(0), error)) {
                s1 = streambuf_to_string(response);
                int msg_code = s1[0];
                if (msg_code < 0 || msg_code >= NUM_MESSAGES) {
                    // Handle error, invalid message header
                }
                if (boost::asio::read(port, response,
                        boost::asio::transfer_at_least(message_lengths[msg_code] - s1.length()), error)) {
                    s2 = streambuf_to_string(response);
                    // Handle the content of s1 and s2
                } else if (error != boost::asio::error::eof) {
                    throw boost::system::system_error(error);
                }
            } else if (error != boost::asio::error::eof) {
                throw boost::system::system_error(error);
            }
        }

    Is boost::asio::streambuf the right thing to use? And how do I extract the data from it so I can parse the message? I also want to know whether I need a separate thread that only calls this function, so that it gets called more often. Isn't there otherwise a risk of losing data between two calls to the function, because so much data comes in that it can't be held in the serial port's buffer? I'm using Qt as a widget toolkit and I don't really know how long it needs to process all its events.

    Read the article

  • Are web-safe colors still relevant?

    - by Gavin Miller
    Since the vast majority of monitors are 16-bit color or more, including mobile devices, does it make sense to even consider web-safe colors when choosing color schemes? Or is it something that ought to be relegated to history as a piece of trivia? For those of you that don't know what web-safe colors are: Another set of 216 color values is commonly considered to be the "web-safe" color palette, developed at a time when many computer displays were only capable of displaying 256 colors. A set of colors was needed that could be shown without dithering on 256-color displays; the number 216 was chosen partly because computer operating systems customarily reserved sixteen to twenty colors for their own use; it was also selected because it allows exactly six shades each of red, green, and blue (6 × 6 × 6 = 216). The list of colors is often presented as if it has special properties that render them immune to dithering. In fact, on 256-color displays applications can set a palette of any selection of colors that they choose, dithering the rest. These colors were chosen specifically because they matched the palettes selected by the then leading browser applications. [Wikipedia]
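
    For reference, the 216-color palette described in the quote is nothing more than every combination of six evenly spaced channel values. A short Python sketch that enumerates it (purely illustrative):

        # The six "web-safe" channel values: 0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF.
        levels = [0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF]

        palette = ["#%02X%02X%02X" % (r, g, b)
                   for r in levels
                   for g in levels
                   for b in levels]

        print(len(palette))    # 216, i.e. 6 * 6 * 6
        print(palette[:3])     # ['#000000', '#000033', '#000066']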

    Read the article

  • 3D Web Sites and Applications

    - by Scott Evernden
    For the last several years I have been struggling to understand why the Internet has so few actually useful 3D web applications. It's 2009 and still everything looks like pages from a Sears catalog. You can turn on your TV and find flying logos every night. After that you can get nostalgic, flip on the ol' N64 and play some Zelda or Mario Kart. On the PC, The Sims 2 is approaching 6 years old already... and then there's WoW. The current generation of users - the Facebook crowd, let's say - has ~no~ problem dealing with multi-dimensional environments. And yet, nothing really immersive seems to happen on the web. I've been hearing about VRML and X3D for at least 10 years and... pffft... nothing earth-shaking going on there. Java 3D? Cool! But... still waiting and waiting. Do you think it will take a killer web app before people become accustomed to, or seek out, what could be far more engaging web experiences? I am not talking about Second Life and other dedicated downloaded applications. I am more focused on apps like Lively or SceneCaster or Hangout, or a half dozen others that are delivered 'painlessly' directly into web pages. My own particular interest is in the domain of virtual stores and immersive shopping. It's been a challenge trying to understand why an average user would not want to browse and wander a changing mall-space - like in the real world - entertained by unexpected discovery. Is the 3D web always going to be 5 years in the future?

    Read the article

  • ASP.NET Membership

    - by Gary McGill
    I'd like to use the ASP.NET membership provider in the following (low-security) scenario... My company will create and administer user accounts on behalf of our clients. These accounts will likely be shared amongst several people in the client company (is that a problem?). There will be 2 types of users (2 roles): client and administrator. Administrators are the people within my company that will have special privileges to create client user accounts, etc. Clients will not be able to self-register. They also won't get to choose their own password, and they should not be able to change their password either, since that will just create confusion where several people are sharing the same account. My internal users (admins) will set the password for each client. Here's the bit I'm struggling with: if a client phones up and asks to be reminded of their password, how can my admin users find out what the password is? Can I configure the provider to store the password in clear text (or other recoverable form), and if so can I get at the password through the .NET API? As I said at the outset, this is a low-security application, and so I plan simply to show the password in the (internal) web page where I have a list of all users.

    Read the article

  • iPhone UI addSubview causing concurrency exception

    - by Eli
    This is really odd... I run my app, and while it is opening and the views are being constructed I get:

        Collection <CALayerArray: 0x124650> was mutated while being enumerated.

    The stack trace goes through the following:

        main
        UIApplicationMain
        -[UIApplication _run]
        CFRunLoopRunInMode
        CFRunLoopRunSpecific
        _UIApplicationHandleEvent
        -[UIApplication sendEvent:]
        -[UIApplication handleEvent:withNewEvent:]
        -[UIApplication _runWithURL:sourceBundleID:]
        -[UIApplication _performInitilizationWithURL:sourceBundleID:]
        -[AppDelegate applicationDidFinishLaunching:]
        +[Controller initializeController]                    //This is my own function
        [window addSubview: pauseMenuController.view]         //This is the last point of my code it goes through
        -[UIView(Hierarchy) addSubview:]
        -[UIView(Internal) _addSubview:positioned:relativeTo:]
        -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:]
        -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:withObject:copySublayers:]
        -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:withObject:copySublayers:]
        _NSFastEnumerationMutationHandler
        objc_exception_throw

    I've run the game lots and lots and lots of times and I've never seen this, then suddenly it popped up. The weird thing is that I'm not creating any other threads (that I know of) until after this code all gets called. It'll be easier for me to debug this if someone can give me some explanation of what might be getting modified while it's being accessed in a UIView. Does it have something to do with adding something to the view while it's already adding something, maybe? Any ideas?

    Read the article

  • Google Maps API Android key

    - by Cookie
    Hi, I am having trouble testing my Android application, which includes the Google Maps API. The official API example works just fine, but if I copy the code into my own project it keeps saying: "The application has stopped unexpectedly". I looked up the key in the keystore several times and registered it with Google. I even tried reinstalling the SDK. Does anybody know what the problem is? Thanks in advance.

        05-16 14:31:11.142: ERROR/ActivityThread(662): Failed to find provider info for com.google.settings
        05-16 14:31:11.150: ERROR/ActivityThread(662): Failed to find provider info for com.google.settings
        05-16 14:31:12.598: ERROR/MediaPlayerService(542): Couldn't open fd for content://settings/system/notification_sound
        05-16 14:31:12.624: ERROR/MediaPlayer(562): Unable to to create media player
        05-16 14:31:05.098: ERROR/ActivityThread(608): Failed to find provider info for android.server.checkin
        05-16 14:31:06.538: ERROR/ActivityThread(608): Failed to find provider info for android.server.checkin
        05-16 14:31:06.645: ERROR/ActivityThread(608): Failed to find provider info for android.server.checkin
        05-16 14:31:12.803: ERROR/AndroidRuntime(715): ERROR: thread attach failed
        05-16 14:31:13.698: ERROR/ActivityThread(723): Failed to find provider info for com.google.settings
        05-16 14:31:13.987: ERROR/AndroidRuntime(723): Uncaught handler: thread main exiting due to uncaught exception

    PS: there are some exceptions following, but none pointing to my code; everything is in background processes.

    Read the article

  • Is using the .NET Image Conversion enough?

    - by contactmatt
    I've seen a lot of people try to code their own image conversion techniques. It often seems very complicated, and ends up using GDI+ function calls and manipulating the bits of the image. This has got me wondering whether I am missing something in the simplicity of .NET's image conversion call when saving an image. Here's the code I have:

        Bitmap tempBmp = new Bitmap(@"c:\temp\img.jpg");
        Bitmap bmp = new Bitmap(tempBmp, 800, 600);
        bmp.Save(@"c:\temp\img.bmp",   // extension depends on format
            ImageFormat.Bmp);
            // These are all the ImageFormats I allow conversion to within the
            // program (ignore the syntax for a second ;)):
            //   ImageFormat.Gif
            //   ImageFormat.Jpeg
            //   ImageFormat.Png
            //   ImageFormat.Tiff
            //   ImageFormat.Wmf
            //   ImageFormat.Bmp

    This is all I'm doing in my image conversion: just setting the location where the image should be saved and passing it an ImageFormat type. I've tested it the best I can, but I'm wondering if I am missing anything in this simple format conversion, or if this is sufficient?

    Read the article

  • JavaScript: Controlling the order that event handlers / listeners are executed in

    - by LRE
    Once again the IE Monster has hit me with an odd problem. I'm writing some changes into an ASP.NET site I inherited a while back. One of the problems is that on some pages there are several controls that add JavaScript functions as handlers to the onload event (using YUI, if that matters). Some of those event handlers assume certain other functions have already been executed. This is all well and good in Firefox and IE7, as the handlers seem to execute in order of registration; IE8, on the other hand, does this backwards. I could go with some kind of double-checking approach, but given the controls are present on several pages, I feel that'd create even more dependencies. So I've started cooking up my own queue class that I push the functions into so I can control their execution order. Then I'll register a single onload handler that instructs the queue to execute in my preferred order. I'm part way through that and have started wondering two things: am I going OTT? Am I reinventing the wheel? Anyone have any insights? Any clean solutions that let me easily enforce execution order?

    Read the article

  • Can I embed a custom font in an iPhone application?

    - by Airsource Ltd
    I would like to have an app include a custom font for rendering text, load it, and then use it with standard UIKit elements like UILabel. Is this possible? I found these links:

        http://discussions.apple.com/thread.jspa?messageID=8304744
        http://forums.macrumors.com/showthread.php?t=569311

    but these would require me to render each glyph myself, which is a bit too much like hard work, especially for multi-line text. I've also found posts that say straight out that it's not possible, but without justification, so I'm looking for a definitive answer.

    EDIT - failed -[UIFont fontWithName:size:] experiment: I downloaded Harrowprint.ttf (downloaded from here) and added it to my Resources directory and to the project. I then tried this code:

        UIFont* font = [UIFont fontWithName:@"Harrowprint" size:20];

    which resulted in an exception being thrown. Looking at the TTF file in Finder confirmed that the font name was Harrowprint.

    EDIT - there have been a number of replies so far which tell me to read the documentation on X or Y. I've experimented extensively with all of these and got nowhere. In one case, X turned out to be relevant only on OS X, not on the iPhone. Consequently I am setting a bounty for this question, and I will award it to the first person who provides an answer (using only documented APIs) with sufficient information to get this working on the device. Working on the simulator too would be a bonus.

    EDIT - it appears that the bounty auto-awards to the answer with the highest number of votes. Interesting. No one actually provided an answer that solved the question as asked - the solution that involves coding your own UILabel subclass doesn't support word wrap, which is an essential feature for me - though I guess I could extend it to do so.

    Read the article

  • Ruby or Rails reporting tools?

    - by Anushank
    I am looking for a report generation tool in Ruby or Rails which allows the user to define a template and then fetch data into the created template. I have been looking through "The Ruby Box: reporting section". There are two reporting tools I have looked at:

    Thin Reports: It is really good. You can create your own report template with the template editor, then produce PDF reports using the thinreports gems.

    ODF Report: You can create a template ODF file using OpenOffice or MS Word, and you can use that template to generate the report.

    Both of these solutions lack the ability to draw charts. Does anyone know of similar reporting tools that can draw charts within a given report? I have tried the RTF Ruby Library. It works, but shares the limitation that it cannot draw charts and graphs. The minimum requirements are:

        - Able to create customizable templates (e.g. design layout, set font size, color, embed images etc.)
        - Able to draw tables and charts
        - Template could be in DOCX, Excel, XML or any other common file format
        - Report output must be in DOCX or RTF format

    Thanks

    Read the article

  • How to create a variadic (with variable length argument list) function wrapper in JavaScript

    - by U-D13
    The intention is to build a wrapper that provides a consistent way of calling native functions of variable arity on various script hosts - so that the script can be executed in a browser as well as in the Windows Script Host or other script engines. I am aware of 3 methods, each of which has its own drawbacks.

    The eval() method:

        function wrapper() {
            var str = '';
            for (var i = 0; i < arguments.length; i++)
                str += (str ? ', ' : '') + 'arguments[' + i + ']';
            return eval('[native_function](' + str + ')');
        }

    The switch() method:

        function wrapper() {
            switch (arguments.length) {
                case 0: return [native_function]();
                case 1: return [native_function](arguments[0]);
                ...
                case n: return [native_function](arguments[0], arguments[1], ... arguments[n-1]);
            }
        }

    The apply() method:

        function wrapper() {
            return [native_function].apply([native_function_namespace], arguments);
        }

    What's wrong with them, you ask? Well, shall we delve into all the reasons why eval() is evil? And all that string concatenation... not a solution to be labelled "elegant". One can never know the maximum n, and thus how many cases to prepare; this would also stretch the script to immense proportions and sin against the holy DRY principle. And the script could get executed on older (pre-JavaScript 1.3 / ECMA-262-3) engines that don't support the apply() method. Now the question: is there any other solution out there?

    Read the article

  • Issue with IHttpHandler and relative URLs

    - by vtortola
    Hi, I've developed an IHttpHandler class and I've configured it as verb="*" path="*", so I'm handling all requests with it, in an attempt to create my own REST implementation for a test web site that generates its HTML dynamically. So, when a request for a .css file arrives, I have to do something like context.Response.WriteFile(Server.MapPath(url)) - and the same for pictures and so on; I have to serve everything myself. My main issue is when I put relative URLs in anchors. For example, I have a main page with a link like <a href="page1">Go to Page 1</a>, and on Page 1 I have another link <a href="page2">Go to Page 2</a>. Page 1 and Page 2 are supposed to be at the same level (http://host/page1 and http://host/page2), but when I click on Go to Page 2, I get this URL in the handler: ~/page1/~/page2 - which is a pain, because I have to clean it up with url = url.Substring(url.LastIndexOf('~')), although I feel that there is nothing wrong here and this behavior is totally normal. Right now I can cope with it, but I think that in the future this is going to bring me headaches. I've tried to set all the links to absolute URLs using the information in context.Request.Url, but that is also a pain :D, so I'd like to know if there is a nicer way to do this kind of thing. Don't hesitate to give me pretty obvious responses, because I'm pretty new to web development and am probably missing something basic about URLs, HTTP and so on. Thanks in advance and kind regards.

    Read the article

  • Java SAX ContentHandler to create new objects for every root node

    - by behrk2
    Hello everyone, I am using SAX to parse some XML. Let's say I have the following XML document:

        <queue>
            <element A> 1 </element A>
            <element B> 2 </element B>
        </queue>
        <queue>
            <element A> 1 </element A>
            <element B> 2 </element B>
        </queue>
        <queue>
            <element A> 1 </element A>
            <element B> 2 </element B>
        </queue>

    And I also have an Elements class:

        public static class Elements {
            String element;

            public Elements() {
            }

            public void setElement(String element) {
                this.element = element;
            }

            public String getElement() {
                return element;
            }
        }

    I am looking to write a ContentHandler that follows this algorithm:

        Vector v;
        for every <queue> root node {
            Elements element = new Elements();
            for every <element> child node {
                element.setElement(value of current element);
            }
            v.addElement(element);
        }

    So, I want to create a bunch of Elements objects and add each to a vector, with each object containing its own String values (from the child nodes found within the root nodes). I know how to parse out the elements and all of those details, but can someone show me a sample of how to structure my ContentHandler to allow for the above algorithm? Thanks!
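
    The question asks for Java, but the handler has the same shape in any SAX binding. As an illustration only, here is that structure in Python's xml.sax: open a new object when each <queue> starts, buffer character data, commit the value when each child element ends, and push the object onto the list when the <queue> closes. (The sample wraps the queues in a single root element and uses well-formed child names, since SAX requires a parseable document.)

        import xml.sax

        class QueueHandler(xml.sax.ContentHandler):
            """Builds one item per <queue> element, filled from its child nodes."""

            def __init__(self):
                self.items = []        # plays the role of the Vector in the question
                self.current = None    # the object being filled for the open <queue>
                self.buffer = []       # character data of the element being read

            def startElement(self, name, attrs):
                if name == "queue":
                    self.current = []
                self.buffer = []

            def characters(self, content):
                self.buffer.append(content)

            def endElement(self, name):
                if name == "queue":
                    self.items.append(self.current)
                    self.current = None
                elif self.current is not None:
                    self.current.append("".join(self.buffer).strip())

        xml_text = b"<root><queue><elementA>1</elementA><elementB>2</elementB></queue></root>"
        handler = QueueHandler()
        xml.sax.parseString(xml_text, handler)
        print(handler.items)   # [['1', '2']]

    The Java version follows the same outline with startElement/endElement/characters overridden on a DefaultHandler, appending to a Vector<Elements> instead of a list.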

    Read the article

  • RubyGems installation errors when using 'sudo' or not

    - by Kenny Peng
    I have a machine running Ubuntu Hardy, which provides its own RubyGems package. Unfortunately that version of RubyGems (1.1.1) is too old to do anything useful with, so I decided to manually update RubyGems to the current version (1.3.6). That part went smoothly, and if I do gem -v, I get 1.3.6, which is expected. The problem is when I try to do sudo gem install rack; it returns this error:

        ERROR:  While executing gem ... (Errno::EACCES)
            Permission denied - /home/username/.gem

    Usually when I install gems as root it knows to install them into /usr/lib/ruby/gems, so why is it checking my home directory at all? Another quirk is that when I do gem install rack (not as root), it says:

        ERROR:  While executing gem ... (Gem::FilePermissionError)
            You don't have write permissions into the /usr/lib/ruby/gems/1.8 directory.

    which is where I want it to go. I've already tried clearing source_caches, trying different versions of RubyGems (1.3.5), and forcing installation into /usr/lib with -i, to no avail. Any ideas on why RubyGems is so insistent on checking my /home directory when installing as root?

    Read the article

  • Unit testing JSON output module, best practices

    - by Banang
    I am currently working on a module that takes one of our business objects and returns a JSON representation of that object to the caller. Due to limitations in our environment I am unable to use any existing JSON writer, so I have written my own, which is then used by the business object writer to serialize my objects. The JSON writer is tested in a way similar to this:

        @Test
        public void writeEmptyArrayTest() {
            String expected = "[ ]";
            writer.array().endArray();
            assertEquals(expected, writer.toString());
        }

    which is only manageable because of the small output each instruction produces, even though I keep feeling there must be a better way. The problem I am now facing is writing tests for the object writer module, where the output is much larger and much less manageable. The risk of spelling mistakes in the expected strings mucking up my tests seems too great, and writing code in this fashion seems both silly and unmanageable from a long-term perspective. I keep feeling like I want to write tests to ensure that my tests are behaving correctly, and this feeling worries me. So, is there a better way of doing this? Surely there must be? Does anyone know of any good literature on this specific case (it doesn't have to be JSON, but you know what I mean)? Grateful for all help.
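
    One common way out of hand-maintained expected strings, sketched here in Python purely to illustrate the idea (the writer under test is hypothetical): keep the hand-rolled writer in production code, but have the test suite parse its output with a trusted JSON parser and assert on the resulting data structure. Whitespace and key-order details then stop mattering, and the expected value stays readable:

        import json
        import unittest

        def build_movie_json():
            # Stand-in for the hand-rolled writer under test (hypothetical).
            return '{ "title": "Movie", "tags": [ ] }'

        class WriterOutputTest(unittest.TestCase):
            def test_movie_payload(self):
                expected = {"title": "Movie", "tags": []}
                # Comparing parsed structures instead of raw strings keeps the
                # assertion independent of the writer's formatting details.
                self.assertEqual(expected, json.loads(build_movie_json()))

        if __name__ == "__main__":
            unittest.main()

    The same idea applies in Java, provided a JSON parser is at least allowed in the test code.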

    Read the article

  • Project setup for creating third party libraries for Android

    - by Jarle Hansen
    Hi all, I am creating a library for Android that others can include in their own projects. So far I have been working on it as a normal Java project with JDK 1.6 set up as the system library. This works just fine in Eclipse when I add the android.jar. The issue comes when I try to run my build script. I am running Gradle and doing a normal compile and test build cycle. My thought was that it does not matter if I compile it with a normal JDK, since this is not a standalone application. The benefit of creating a normal Java project is that Gradle supports it much better. My project also does not contain any UI at all. However, the problem is that android.jar and the JDK of course contain lots of the same classes, and I think this is what messes up my build script. Everything crashes when running the tests (the tests are in the same project under src/test/java). My question is: how should I create this project that is meant to be included in Android projects as a third-party library? Should I create it as an Android project in Eclipse even though I am only creating a library that does not use any of the UI features? Also, should the tests be in a separate project? Thanks for all responses!

    Read the article

  • Question about MySQLdb, OS X 10.5, and authentication

    - by timpone
    I'm a noob at Python and have been having problems with MySQLdb on OS X Leopard 10.5. I have a PHP app that is doing db access just fine with PDO, but I also want to access the database from Python. When I use the same credentials with MySQLdb as with PHP, I get the following error:

        File "build/bdist.macosx-10.5-i386/egg/MySQLdb/connections.py", line 188, in __init__
        _mysql_exceptions.OperationalError: (1045, "Access denied for user 'arc_db'@'localhost' (using password: YES)")

    The authentication piece works fine on my Ubuntu server (installed via apt-get), implying that it is something specific to my OS X MySQLdb install. Looking at some postings, I thought it might be my local build of MySQLdb, which seems to be problematic on OS X. But I am able to import it fine:

        Python 2.5.1 (r251:54863, Feb  6 2009, 19:02:12)
        [GCC 4.0.1 (Apple Inc. build 5465)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import MySQLdb
        >>>

    Also, to get a positive test, I am able to access and return results from a database titled test_something (which presumably bypasses MySQL's authentication - not sure exactly how, though). Trying to figure out a little more about what is going on, I turned on logging for MySQL and got the following (my own comments added):

        100609 19:09:45  3 Connect  Access denied for user 'arc_db'@'localhost' (using password: YES)   //did not work
        100609 19:10:02  4 Connect  arc_db@localhost on arc_development                                //did work

    I'm not really sure what the 3 or 4 means, but presumably a success or failure. So, what would be the next step? Am I making some obvious stupid Python mistake (very likely)? Is there a better way for me to prove that this should/can be working? Is there any way to determine exactly what MySQLdb is sending in its authentication message to MySQL? Thanks
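
    When comparing against the working PHP/PDO setup, it can help to spell out every connection parameter explicitly, since MySQL treats 'localhost' (normally the Unix socket) and '127.0.0.1' (TCP) as different hosts and grants can differ between them. A minimal sketch with placeholder credentials:

        import MySQLdb

        conn = MySQLdb.connect(
            host="127.0.0.1",        # force TCP; compare with host="localhost" (socket)
            port=3306,
            user="arc_db",
            passwd="secret",         # placeholder, not the real password
            db="arc_development",
        )
        cur = conn.cursor()
        cur.execute("SELECT CURRENT_USER(), VERSION()")
        print(cur.fetchone())        # shows which account MySQL actually matched
        conn.close()

    CURRENT_USER() is useful here because error 1045 usually means the client matched a different user@host entry (possibly an anonymous one) than the grant that was intended.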

    Read the article

  • iPhone app: How to implement in-app purchased game levels

    - by Wonderflonium
    So, I understand that it's possible to set up in-app purchases for iPhone apps to sell non-consumables like game levels. I understand the logic behind the purchase part, but what I don't understand is how to deliver the new game level. For example: I build an app that contains the first level, and users purchase additional levels. Is it better to build all the other levels into the app, so that a purchase simply unlocks one via a plist entry or something? That doesn't seem very updatable to me: every time I come up with a new level, I'd have to update the app. What I don't understand, then, is how do I package up a level and download it as a separate entity that can be accessed by the game? Would the level just be some XML with images in a ZIP folder or something? How does the level get added to the game? What are the best practices for this type of thing? I Googled and found NOTHING about this. I'm a little bit confused by the concept and any help would be appreciated. I'm not looking for someone to write the game for me, I just need to be pointed in the right direction so I can develop it on my own.

    Read the article

  • ASP MVC: Submitting a form with nested user controls

    - by Nigel
    I'm fairly new to ASP.NET MVC so go easy :). I have a form that contains a number of user controls (partial views, as in System.Web.Mvc.ViewUserControl), each with their own view models, and some of those user controls have nested user controls within them. I intended to reuse these user controls, so I built up the form as a hierarchy and pass the form a parent view model that contains all the user controls' view models within it. For example:

        Parent Page (with form and ParentViewModel)
            --> ChildControl1 (uses ViewModel1, passed from the ParentViewModel.ViewModel1 property)
            --> ChildControl2 (uses ViewModel2, passed from the ParentViewModel.ViewModel2 property)
                --> ChildControl3 (uses ViewModel3, passed from the ViewModel2.ViewModel3 property)

    I hope this makes sense... My question is: how do I retrieve the view data when the form is submitted? It seems the view data cannot bind to the ParentViewModel:

        public string Save(ParentViewModel viewData)...

    as viewData.ViewModel1 and viewData.ViewModel2 are always null. Is there a way I can perform a custom binding? Ultimately I need the form to cope with a dynamic number of user controls and perform an asynchronous submission without postback. I'll cross those bridges when I come to them, but I mention it now so any answer won't preclude that functionality. Many thanks.

    Read the article

  • What does this code from AuthKit do? (where are these functions and methods defined?)

    - by Beau Simensen
    I am trying to implement my own authentication method for AuthKit and am trying to figure out how some of the built-in methods work. In particular, I'm trying to figure out how to update REMOTE_USER in environ correctly. This is how it is handled inside authkit.authenticate.basic, but it is pretty confusing. I cannot find any place where REMOTE_USER and AUTH_TYPE are defined. Is there something strange going on here, and if so, what is it?

        def __call__(self, environ, start_response):
            environ['authkit.users'] = self.users
            result = self.authenticate(environ)
            if isinstance(result, str):
                AUTH_TYPE.update(environ, 'basic')
                REMOTE_USER.update(environ, result)
            return self.application(environ, start_response)

    There are actually a number of all-uppercase names like this that I cannot find a definition for. For example, where does AUTHORIZATION come from below?

        def authenticate(self, environ):
            authorization = AUTHORIZATION(environ)
            if not authorization:
                return self.build_authentication()
            (authmeth, auth) = authorization.split(' ', 1)
            if 'basic' != authmeth.lower():
                return self.build_authentication()
            auth = auth.strip().decode('base64')
            username, password = auth.split(':', 1)
            if self.authfunc(environ, username, password):
                return username
            return self.build_authentication()

    I feel like maybe I am missing some special syntax handling for the environ dict, but it is possible that there is something else really weird going on here that isn't immediately obvious to someone as new to Python as myself.
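
    Those all-uppercase names are not special environ syntax; they are almost certainly module-level helper objects imported at the top of authkit.authenticate.basic (the Paste library, which AuthKit builds on, ships header/environ accessors with exactly this call-to-read, update-to-write style). Functionally, REMOTE_USER.update(environ, result) just writes the standard WSGI/CGI keys. A minimal sketch of a custom authenticator that does the same thing directly, without those helpers (the authfunc callable here is hypothetical and your own):

        def custom_auth_middleware(app, authfunc):
            """WSGI middleware that fills the same environ keys AuthKit's basic handler does."""
            def middleware(environ, start_response):
                username = authfunc(environ)          # your own check; return a str or None
                if username is None:
                    start_response("401 Unauthorized",
                                   [("WWW-Authenticate", 'Basic realm="example"')])
                    return [b"Authentication required"]
                environ["REMOTE_USER"] = username     # what REMOTE_USER.update(environ, ...) boils down to
                environ["AUTH_TYPE"] = "custom"       # what AUTH_TYPE.update(environ, ...) boils down to
                return app(environ, start_response)
            return middleware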

    Read the article

  • Subversion versus Vault

    - by WebDude
    I'm currently reviewing the benefits of moving from SVN to SourceGear Vault. Has anyone got advice or a link to a detailed comparison between the two? Bear in mind I would have to move my current source control system across, which works strongly in SVN's favor. Here is some info I have found out thus far from my own investigations. I have been running timing tests between the two, and Vault seems to perform most operations much faster. The time tests used the same server as the repository, the same workstation client, and the same project.

    Time Comparisons (minutes:seconds)

        Operation               SVN      Vault
        Add/Commit              12:30    4:45
        Get Latest Revision     5:35     0:51
        Tagging/Labelling       0:01     0:30
        Branching               N/A*     3:23

        * I don't think true branching exists in SVN.

    I also found an online source comparing some other points. This is the kind of information I'm looking for.

    Usage Comparisons

        - Subversion is edit/merge/commit only. Vault allows you to do either edit/merge/commit or checkout/edit/checkin.
        - Vault looks and acts just like VSS, which makes the learning curve effectively zero for VSS users.
        - Vault has a VS plugin, but it only works if you're going to run in checkout mode.
        - Subversion has clients for pretty much every OS you can imagine; Vault has a GUI client for Windows and a command-line client for Mono.
        - Both support remote work, since both use HTTP as their transport (Subversion uses extended DAV, Vault uses SOAP).
        - Subversion installation, especially with Apache, is more complex.
        - Subversion has a lot of third-party support; Vault has just a few things.

    My question: has anyone got advice or a link to a detailed comparison between the two?

    Read the article

  • In-document schema declarations and lxml

    - by shylent
    As per the official documentation of lxml, if one wants to validate an XML document against an XML Schema document, one has to:

        - construct the XMLSchema object (basically, parse the schema document)
        - construct the XMLParser, passing the XMLSchema object as its schema argument
        - parse the actual XML document (the instance document) using the constructed parser

    There can be variations, but the essence is pretty much the same no matter how you do it - the schema is specified 'externally' (as opposed to being specified inside the actual XML document). If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, it completely ignores the whole idea of the schemaLocation and noNamespaceSchemaLocation attributes from xsi. This introduces a whole bunch of limitations, starting with the fact that you have to manage the instance-to-schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document), you cannot validate a document using multiple schemata (say, when each schema governs its own namespace), and so on. So the question is: am I missing something completely trivial or doing it wrong? Or are my statements about lxml's limitations regarding schema validation true? To recap, I'd like to be able to:

        - have the parser use the schema location declarations in the instance document at parse/validation time
        - use multiple schemata to validate an XML document
        - declare schema locations on non-root elements (not of extreme importance)

    Maybe I should look for a different library? Although that'd be a real shame - lxml is the de facto XML processing library for Python and is regarded by everyone as the best in terms of performance/features/convenience (and rightfully so, to a certain extent).
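
    For the single-schema case, the 'hack' mentioned above is small. A sketch (file paths hypothetical) that honours xsi:noNamespaceSchemaLocation by reading it off the root element and validating against it, instead of wiring a schema into the parser up front:

        from lxml import etree

        XSI = "http://www.w3.org/2001/XMLSchema-instance"

        def validate_against_declared_schema(instance_path):
            doc = etree.parse(instance_path)
            location = doc.getroot().get("{%s}noNamespaceSchemaLocation" % XSI)
            if location is None:
                raise ValueError("instance document declares no schema location")
            # A relative location would still need resolving against the
            # instance document's own directory or base URI.
            schema = etree.XMLSchema(etree.parse(location))
            return schema.validate(doc)   # True/False; details are in schema.error_log

    This does not address multiple schemata per document or declarations on non-root elements, which is exactly the gap the question points at.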

    Read the article
