Search Results

Search found 16838 results on 674 pages for 'writing patterns dita cms'.


  • R plotting multiple histograms on single plot to .pdf as a part of R batch script

    - by Bryce Thomas
    I am writing R scripts which play just a small role in a chain of commands I am executing from a terminal. Basically, I do much of my data manipulation in a Python script and then pipe the output to my R script for plotting. So, from the terminal I execute commands which look something like:

        $ python whatever.py | R CMD BATCH do_some_plotting.R

    This workflow has been working well for me so far, though I have now reached a point where I want to overlay multiple histograms on the same plot, inspired by this answer to another user's question on Stack Overflow. Inside my R script, my plotting code looks like this:

        pdf("my_output.pdf")
        plot(hist(d$original, breaks="FD", prob=TRUE),
             col=rgb(0,0,1,1/4), xlim=c(0,4000), main="original - This plot is in beta")
        plot(hist(d$minus_thirty_minutes, breaks="FD", prob=TRUE),
             col=rgb(1,0,0,1/4), add=T, xlim=c(0,4000), main="minus_thirty_minutes - This plot is in beta")

    Notably, I am using add=T, which is presumably meant to specify that the second plot should be overlaid on top of the first. When my script has finished, the result I am getting is not two histograms overlaid on top of each other, but rather a 3-page PDF whose 3 individual plots have the titles: i) Histogram of d$original, ii) original - This plot is in beta, iii) Histogram of d$minus_thirty_minutes. So there are two points here I'm looking to clarify. Firstly, even if the plots weren't overlaid, I would expect just a 2-page PDF, not a 3-page PDF. Can someone explain why I am getting a 3-page PDF? Secondly, is there a correction I can make here somewhere to get just the two histograms plotted, and both of them on the same plot (i.e. a 1-page PDF)? The other Stack Overflow question/answer I linked to in the first paragraph did mention that alpha-blending isn't supported on all devices, so I'm curious whether this has anything to do with it. Either way, it would be good to know whether there is an R-based solution to my problem or whether I'm going to have to pipe my data into a different language/plotting engine.
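
    A minimal sketch of one likely fix (untested; assuming base R graphics): hist() already draws a plot when called with its default plot=TRUE, so wrapping it in plot() renders each histogram twice, which is where the third page comes from. Computing the histograms with plot=FALSE and then drawing the returned objects, the second with add=TRUE, gives a single overlaid page:

        pdf("my_output.pdf")
        h1 <- hist(d$original, breaks = "FD", plot = FALSE)
        h2 <- hist(d$minus_thirty_minutes, breaks = "FD", plot = FALSE)
        # freq = FALSE plots densities, matching the prob = TRUE intent above
        plot(h1, freq = FALSE, col = rgb(0, 0, 1, 1/4), xlim = c(0, 4000),
             main = "original vs. minus_thirty_minutes")
        plot(h2, freq = FALSE, col = rgb(1, 0, 0, 1/4), add = TRUE)
        dev.off()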

    Read the article

  • jQuery Calculation: summing two different types of item

    - by st4n
    I'm writing a script like the example shown in the demo, where all of the "Total" values (including the "Grand Total") are automatically calculated using the calc() method, at this link. But I have some fields where I want to apply the equation qty * price, and others where I want to do other operations. Can you tell me how? Thank you very much. I tried this, but it is very stupid code, and the grand total does not sum the two different fields:

        function recalc() {
            $("[id^=total_item]").calc("qty * price", {
                qty: $("input[name^=qty_item_]"),
                price: $("[id^=price_item_]")
            }, function (s) {
                // return the number as a dollar amount
                return "$" + s.toFixed(2);
            }, function ($this) {
                // sum the total of the $("[id^=total_item]") selector
                var sum = $this.sum();
                // round the results to 2 digits
                $("#grandTotal").text("$" + sum.toFixed(2));
            });

            $("[id^=total_otheritem]").calc("qty1 / price1", {
                qty1: $("input[name^=qty_other_]"),
                price1: $("[id^=price_other_]")
            }, function (s) {
                // return the number as a dollar amount
                return "$" + s.toFixed(2);
            }, function ($this) {
                var sum = $this.sum();
                // round the results to 2 digits
                $("#grandTotal").text("$" + sum.toFixed(2));
            });
        }
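
    A minimal sketch of one way around the grand-total problem (untested; it assumes the jQuery Calculation plugin's .sum() method): each calc() callback above overwrites #grandTotal with only its own sum, so recomputing the grand total from both sets of per-row totals keeps the two contributions from clobbering each other:

        function updateGrandTotal() {
            // add the sums of both total columns, then write the text once
            var sum = $("[id^=total_item]").sum() + $("[id^=total_otheritem]").sum();
            $("#grandTotal").text("$" + sum.toFixed(2));
        }

    Calling updateGrandTotal() from both callbacks in place of the two separate $("#grandTotal").text(...) lines should keep the figure consistent.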

    Read the article

  • Django extending user model and displaying form

    - by MichalKlich
    Hello, I am writing a website and I'd like to implement profile management. The basic idea is that users can edit some of their own details, like first and last name. Now, I had to extend the User model to add my own fields and an email address, and I am having trouble with displaying the form. An example will describe better what I would like to achieve. This is my extended user model:

        class UserExtended(models.Model):
            user = models.ForeignKey(User, unique=True)
            kod_pocztowy = models.CharField(max_length=6, blank=True)
            email = models.EmailField()

    This is what my form looks like:

        class UserCreationFormExtended(UserCreationForm):
            def __init__(self, *args, **kwargs):
                super(UserCreationFormExtended, self).__init__(*args, **kwargs)
                self.fields['email'].required = True
                self.fields['first_name'].required = False
                self.fields['last_name'].required = False

            class Meta:
                model = User
                fields = ('username', 'first_name', 'last_name', 'email')

    It works fine when registering, as I need to allow users to enter a username and email, but when it comes to editing a profile it displays too many fields. I would not like users to be able to edit the username and email. How could I disable those fields in the form? Thanks for help.
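
    A minimal sketch of one common approach (untested; UserEditForm is a hypothetical name): use a separate ModelForm for editing that simply omits the fields users must not change, rather than trying to disable them on the registration form:

        from django import forms
        from django.contrib.auth.models import User

        class UserEditForm(forms.ModelForm):
            class Meta:
                model = User
                # username and email are left out, so they are neither
                # displayed nor accepted from the POST data
                fields = ('first_name', 'last_name')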

    Read the article

  • PHP question about global variables and form requests

    - by user220201
    Hi, this is probably a stupid question, but I will ask anyway since I have no idea. I have written basic PHP code which serves forms. Say I have a login page and I serve it using the login.php page; it will be called in the login.html page like this:

        <form action="login.php" method="post">

    By this it is also implied that every POST needs its own PHP file, doesn't it? That feels weird. Is there a way to have a single file, say code.php, and just have each of the forms handled as functions instead? EDIT: Specifically, say I have 5 forms that are used one after the other in my application. Say after login the user does A, B, C and D tasks, each of which is sent to the server as a POST request. So instead of having A.php, B.php, C.php and D.php I would like to have a single code.php with A(), B(), C() and D() as functions. Is there a way to do this? Also, on the same note, how do I deal with, say, a global array (e.g. an array of currently logged in users) across multiple forms? I want to do this without writing to a DB. I know it's probably better to write to a DB and query it, but is it even possible to do with a global array? The reason I was thinking about having all the form functions in one file is to use a global array. Thanks, - Pav
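
    A minimal sketch of the usual pattern (untested; the handler names are hypothetical): point every form at the same script and dispatch on a hidden "action" field. Note that PHP does not keep ordinary global variables alive between requests, so a cross-request "global array" has to live somewhere persistent such as $_SESSION (per user), a file, or a cache, never in a plain global:

        <?php
        // code.php: one entry point for all forms
        session_start();

        $action = isset($_POST['action']) ? $_POST['action'] : '';
        switch ($action) {
            case 'login': handle_login(); break;  // <input type="hidden" name="action" value="login">
            case 'A':     handle_a();     break;
            case 'B':     handle_b();     break;
            default:      header('HTTP/1.1 400 Bad Request');
        }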

    Read the article

  • Omit return type in C++0x

    - by Clinton
    I've recently found myself using the following macro with GCC 4.5 in C++0x mode:

        #define RETURN(x) -> decltype(x) { return x; }

    And writing functions like this:

        template <class T>
        auto f(T&& x) RETURN (( g(h(std::forward<T>(x))) ))

    I've been doing this to avoid the inconvenience of having to effectively write the function body twice, and having to keep changes in the body and the return type in sync (which in my opinion is a disaster waiting to happen). The problem is that this technique only works on one-line functions. So when I have something like this (convoluted example):

        template <class T>
        auto f(T&& x) -> ...
        {
            auto y1 = f(x);
            auto y2 = h(y1, g1(x));
            auto y3 = h(y1, g2(x));
            if (y1) { ++y3; }
            return h2(y2, y3);
        }

    Then I have to put something horrible in the return type. Furthermore, whenever I update the function, I'll need to change the return type, and if I don't change it correctly, I'll get a compile error if I'm lucky, or a runtime bug in the worst case. Having to copy and paste changes to two locations and keep them in sync is, I feel, not good practice. And I can't think of a situation where I'd want an implicit cast on return instead of an explicit cast. Surely there is a way to ask the compiler to deduce this information. What is the point of the compiler keeping it a secret? I thought C++0x was designed so such duplication would not be required.

    Read the article

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    Hi guys! I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle and I'd like some advice on the best way to go about it. The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting. What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data into a flat file using the SQL Server equivalent of sqlplus as a scheduled task, dump it into a Well Known Location, then on the Oracle side have a cron job that copies down the file and loads it with SQL*Loader and rebuilds indexes etc. This is all doable, but very manual. Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love not to have to write the CSV dumping detail or the SQL*Loader script: just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV", and not worry about mapping column names and all the other arcane sqlldr voodoo. Best practices? Thoughts? Comments? Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column.
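
    A minimal sketch of the plumbing (untested; all object and file names are hypothetical): bcp on the SQL Server side can dump a denormalising view to a delimited file from a scheduled task, and a SQL*Loader control file on the Oracle side can truncate-and-load it, which covers the two halves that aren't the query or the indexes:

        REM SQL Server side: character-mode dump of a view to CSV
        bcp "SELECT * FROM reporting.dbo.DenormalisedView" queryout report.csv -c -t"," -S myserver -T

        -- Oracle side: report.ctl, run as: sqlldr user/pw control=report.ctl
        LOAD DATA
        INFILE 'report.csv'
        TRUNCATE
        INTO TABLE report_flat
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        TRAILING NULLCOLS
        (col1, col2, col3)

    The column list in the control file still has to be written once, but the field-to-column mapping is positional, so there is no per-column transformation logic unless a datatype needs it.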

    Read the article

  • If I use Unicode on an ISO-8859-1 site, how will that be interpreted by a browser?

    - by grg-n-sox
    So I've got a site that uses ISO-8859-1 encoding and I can't change that. I want to be sure that the content I enter into the web app on the site gets parsed correctly. The parser works on a character-by-character basis. I also cannot change the parser; I am just writing files for it to handle. The content in the file I am telling the app to display after parsing contains Unicode characters (or at least I assume so, even if they were produced by Windows Alt codes mapped to CP437). Using entities is not an option due to the character-by-character operation of the parser. The only characters that the parser escapes on output are markup-sensitive ones like the ampersand, less-than, and greater-than symbols. I would just go ahead and put this through to see what it looks like, but output can only be seen after publishing, which has to spend a couple of days getting approved, and that would be asking too much for just a test case. So, long story short: if I told the site to output ?ÇÑ¥?? on a page whose meta tag states it is supposed to use ISO-8859-1, will a browser auto-detect the Unicode and display it, or will it literally interpret the bytes as ISO-8859-1 and show a different set of characters?

    Read the article

  • Why can't I handle a KeyboardInterrupt in python?

    - by Josh
    I'm writing Python 2.6.6 code on Windows that looks like this:

        try:
            dostuff()
        except KeyboardInterrupt:
            print "Interrupted!"
        except:
            print "Some other exception?"
        finally:
            print "cleaning up...."
        print "done."

    dostuff() is a function that loops forever, reading a line at a time from an input stream and acting on it. I want to be able to stop it and clean up when I hit Ctrl-C. What's happening instead is that the code under except KeyboardInterrupt: isn't running at all. The only thing that gets printed is "cleaning up...", and then a traceback is printed that looks like this:

        Traceback (most recent call last):
          File "filename.py", line 119, in <module>
            print 'cleaning up...'
        KeyboardInterrupt

    So the exception-handling code is NOT running, and the traceback claims that a KeyboardInterrupt occurred during the finally clause, which doesn't make sense because hitting Ctrl-C is what caused that part to run in the first place! Even the generic except: clause isn't running. EDIT: Based on the comments, I replaced the contents of the try: block with sys.stdin.read(). The problem still occurs exactly as described, with the first line of the finally: block running and then printing the same traceback.
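
    A minimal sketch of one common workaround on Windows (untested; process_one_line is a hypothetical stand-in for the body of dostuff()): install a SIGINT handler so Ctrl-C sets a flag instead of raising an asynchronous KeyboardInterrupt, which can otherwise be delivered at an awkward moment such as inside the finally block:

        import signal

        interrupted = False

        def handle_sigint(signum, frame):
            # record the interrupt; the main loop checks this flag
            global interrupted
            interrupted = True

        signal.signal(signal.SIGINT, handle_sigint)

        while not interrupted:
            process_one_line()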

    Read the article

  • SQL Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside of a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table, and then dropping it at the end of the transaction. This mostly works. Sometimes, however, when creating the tables SQL Compact will try to acquire PAGE-level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception. For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not the one writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this. Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.
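
    A minimal sketch of a mitigation (untested; CreateTempTable is a hypothetical helper, and this assumes System.Data.SqlServerCe and System.Threading): since the contention happens only at create-table time, retrying just that step with a short back-off keeps the rest of the transaction unchanged:

        const int maxAttempts = 3;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                CreateTempTable(connection, transaction);  // issues the CREATE TABLE
                break;
            }
            catch (SqlCeException)
            {
                if (attempt == maxAttempts) throw;
                Thread.Sleep(50 * attempt);  // back off, then retry
            }
        }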

    Read the article

  • Return value mapping on Stored Procedures in Entity Framework

    - by Yucel
    Hi, I am calling a stored procedure with Entity Framework, but a custom property that I set in a partial entity class is null. I have entities in my EDMX. For example, I have a "User" table in my database and so I have a "User" class in my entities. I have a stored procedure called GetUserById(@userId), and in this stored procedure I am writing a basic SQL statement like this:

        SELECT * FROM Users WHERE Id = @userId

    In my EDMX I made a function import to call this stored procedure and set its return type to an entity (selecting User from the dropdown list). It works perfectly when I call the stored procedure like this:

        User user = Context.SP_GetUserById(123456);

    But then I added a new custom column to the stored procedure to return one more value, like this:

        SELECT *, dbo.ConcatRoles(U.Id) AS RolesAsString
        FROM membership.[User] U
        WHERE Id = @id

    Now when I execute it from SSMS, the new column called RolesAsString appears in the result. To make this work with Entity Framework, I added a new property called RolesAsString to my User class like this:

        public partial class User
        {
            public string RolesAsString { get; set; }
        }

    But this field isn't filled by the stored procedure when I call it. I looked at the Mapping Details window of my SP_GetUserById; there isn't a mapping in this window. I want to add one, but the window is read-only and I can't map it. I looked at the source of the EDMX and can't find anything about the mapping of the SP. How can I map this custom field?
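
    A minimal sketch of one workaround (untested; UserWithRoles is a hypothetical DTO): function imports only populate mapped entity columns, so a property added in a partial class is never filled. Materialising the result yourself, e.g. with ObjectContext.ExecuteStoreQuery in EF4, sidesteps the mapping entirely:

        // plain class whose property names match the result set's columns
        public class UserWithRoles
        {
            public int Id { get; set; }
            public string RolesAsString { get; set; }
            // ... remaining columns
        }

        var user = context.ExecuteStoreQuery<UserWithRoles>(
            "EXEC GetUserById @userId",
            new SqlParameter("@userId", 123456)).FirstOrDefault();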

    Read the article

  • Avoid an "out of memory error" in Java(eclipse), when using large data structure?

    - by gnomed
    OK, so I am writing a program that unfortunately needs to use a huge data structure to complete its work, but it is failing with an "out of memory" error during its initialization. While I understand entirely what that means and why it is a problem, I am having trouble overcoming it, since my program needs to use this large structure and I don't know any other way to store it. The program first indexes a large corpus of text files that I provide. This works fine. Then it uses this index to initialize a large 2D array. This array will have n x n entries, where "n" is the number of unique words in the corpus of text. For the relatively small chunk I am testing it on (about 60 files) it needs to make approximately 30,000 x 30,000 entries. This will probably be bigger once I run it on my full intended corpus too. It consistently fails every time, after it indexes, while it is initializing the data structure (to be worked on later). Things I have done include: revamping my code to use a primitive int[] instead of a TreeMap, eliminating redundant structures, etc. Also, I have run Eclipse with "eclipse -vmargs -Xmx2g" to max out my allocated memory. I am fairly confident this is not going to be a simple line-of-code solution, but is most likely going to require a very new approach. I am looking for what that approach is. Any ideas? Thanks, B.
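
    A back-of-the-envelope check shows why the allocation fails: 30,000 x 30,000 int entries is 9 x 10^8 cells at 4 bytes each, roughly 3.6 GB, well past a 2 GB heap. (Note too that "eclipse -vmargs" sizes the IDE's own JVM; a program launched from Eclipse takes its heap from the VM arguments of its run configuration.) A minimal sketch of one alternative (untested): word-pair matrices are usually overwhelmingly sparse, so storing only the non-zero cells in a map keyed by the packed (row, col) pair can cut the footprint by orders of magnitude:

        import java.util.HashMap;
        import java.util.Map;

        class SparseIntMatrix {
            private final Map<Long, Integer> cells = new HashMap<Long, Integer>();

            // pack row and column into a single long key
            private static long key(int row, int col) {
                return ((long) row << 32) | (col & 0xFFFFFFFFL);
            }

            void increment(int row, int col) {
                Long k = key(row, col);
                Integer v = cells.get(k);
                cells.put(k, v == null ? 1 : v + 1);
            }

            int get(int row, int col) {
                Integer v = cells.get(key(row, col));
                return v == null ? 0 : v;
            }
        }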

    Read the article

  • Copying rows of a matrix to a binary file

    - by Flethuseo
    Hi everyone. I want to write each row of a matrix to a binary file. I tried writing it like this:

        vector< vector<uint32_t> > matrix;
        ...
        for(size_t i = 0; i < matrix.size(); ++i)
        {
            ofile->write( reinterpret_cast<char*>(&matrix[i]),
                          sizeof(uint32_t*sizeof(matrix[i])) );
            for(size_t j = 0; j < numcols; ++j)
            {
                std::cout << left << setw(10) << matrix[i][j];
            }
            cout << endl;
        }

    but it doesn't work; I get garbage numbers. Any help appreciated, Ted.
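
    A minimal sketch of the likely fix (untested): &matrix[i] is the address of the vector object itself (its internal bookkeeping), not of its elements, and the sizeof expression does not compute the row's byte count. Writing from the address of the first element, with the length in bytes taken from the row's size, gives the intended output:

        for (size_t i = 0; i < matrix.size(); ++i) {
            if (!matrix[i].empty()) {
                // address of the first element; vector storage is contiguous
                ofile->write(reinterpret_cast<const char*>(&matrix[i][0]),
                             matrix[i].size() * sizeof(uint32_t));
            }
        }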

    Read the article

  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The said protocol is pretty simple: some basic JSON-formatted messages which are transmitted in some basic frame wrapping so that clients know a message has arrived completely and is ready to be parsed. The daemon will need to handle a number of connections (about 200 at the same time), do some management of them, and pass messages along, like in a chat room. In the past I've been using mostly C++ to write my daemons, often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects, it was simple to do, and very portable. This usually worked just fine, and I didn't have much trouble. Being a Linux administrator for a good while now, I've noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a pure historic UNIX background (like KISS), or for plain portability, or reduction of bloat? What are the reasons not to use C++ or any "higher-level" languages for things like daemons? Thanks in advance! Update 1: For me, using C++ usually is more convenient because I have objects with getter and setter methods and such. Plain C's "context" objects can be a real pain at some point, especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C, and that C code is basically C++. But that's not the point. ;)

    Read the article

  • Efficiency of manually written loops vs operator overloads (C++)

    - by Sagekilla
    Hi all, in the program I'm working on I have 3-element arrays, which I use as mathematical vectors for all intents and purposes. Through the course of writing my code, I was tempted to just roll my own Vector class with simple +, -, *, / etc. overloads so I can simplify statements like:

        for (int i = 0; i < 3; i++)
            r[i] = r1[i] - r2[i];
        // becomes:
        r = r1 - r2;

    which should be more or less identical in generated code. But when it comes to more complicated things, could this really impact my performance heavily? One example that I have in my code is this. The manually written version:

        for (int j = 0; j < 3; j++) {
            p.vel[j] = p.oldVel[j] + (p.oldAcc[j] + p.acc[j]) * dt2 + (p.oldJerk[j] - p.jerk[j]) * dt12;
            p.pos[j] = p.oldPos[j] + (p.oldVel[j] + p.vel[j]) * dt2 + (p.oldAcc[j] - p.acc[j]) * dt12;
        }

    Using a Vector class with operator overloads:

        p.vel = p.oldVel + (p.oldAcc + p.acc) * dt2 + (p.oldJerk - p.jerk) * dt12;
        p.pos = p.oldPos + (p.oldVel + p.vel) * dt2 + (p.oldAcc - p.acc) * dt12;

    I am compiling my code for maximum possible speed, as it's extremely important that this code runs quickly and calculates accurately. So will relying on my Vectors for these sorts of things really affect me? For those curious, this is part of some numerical integration code which is not trivial to run in my program. Any insight would be appreciated, as would any idioms or tricks I'm unaware of.
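
    A minimal sketch of the kind of class in question (untested): with a small fixed-size type whose operators are defined inline in a header, optimising compilers typically inline the calls and eliminate the return-value temporaries, producing code equivalent to the hand-written loops; the usual advice is to write it this way and then confirm by profiling or inspecting the generated assembly:

        struct Vec3 {
            double v[3];
        };

        inline Vec3 operator+(const Vec3& a, const Vec3& b) {
            Vec3 r;
            for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] + b.v[i];
            return r;
        }

        inline Vec3 operator-(const Vec3& a, const Vec3& b) {
            Vec3 r;
            for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] - b.v[i];
            return r;
        }

        inline Vec3 operator*(const Vec3& a, double s) {
            Vec3 r;
            for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] * s;
            return r;
        }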

    Read the article

  • Java Input/Output streams for unnamed pipes created in native code?

    - by finrod
    Is there a way to easily create Java input/output streams for unnamed pipes created in native code? Motivation: I need my own implementation of the Process class. The native code spawns me a new process with the subprocess' IO streams redirected to unnamed pipes. Problem: the file descriptors for the correct ends of those pipes make their way to Java. At this point I get stuck, as I cannot create a new FileDescriptor which I could pass to FileInputStream/FileOutputStream. I have used reflection to get around the problem and got communication with a simple bouncer subprocess running. However, I have a notion that it is not the cleanest way to go. Have you used this approach? Do you see any problems with it? (The platform will never change.) Searching around the internets revealed a similar solution using native code. Any thoughts before I dive into heavy testing of this approach are very welcome. I would like to give a shot to existing code before writing my own IO stream implementations... Thank you.
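
    A minimal sketch of the reflection hack described (untested, fragile and JVM-specific; it assumes the private int field named "fd" used by Sun/OpenJDK's FileDescriptor):

        import java.io.FileDescriptor;
        import java.io.FileInputStream;
        import java.lang.reflect.Field;

        public final class NativeFd {
            // wrap a native file descriptor in a FileInputStream
            public static FileInputStream inputStreamFor(int nativeFd) throws Exception {
                FileDescriptor fd = new FileDescriptor();
                Field f = FileDescriptor.class.getDeclaredField("fd");
                f.setAccessible(true);
                f.setInt(fd, nativeFd);
                return new FileInputStream(fd);
            }
        }

    The dependence on a private field layout that differs between JVMs (Windows builds keep a separate "handle" field, for instance) is exactly why this approach is considered unclean.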

    Read the article

  • Create a real-time web application using the .NET Framework and Azure - very confused

    - by test
    Let us say I would like a simple (yet complex) web application where there is continuous READING AND WRITING to the SQL Azure database. Let us say I am tracking a location, and I would like it to be updated very frequently (let's take the worst case: every second). From the little knowledge I have, I think this involves continuously writing the location to the database, and continuously reading from the database to update another person through a website. Do you have any suggestions as to which technologies I can use? Is there a simple way? I have heard about node.js and SignalR, but I have no idea how to use them, or whether they are really what I need. The last tutorial I checked out simply uses a while(true) loop, but I don't think that's a good way to keep a thread continuously busy... Do I have to create some background task? Do I have to create some web service? This is a school project and I wish not to go for the most difficult option, but if there is some sort of solution, challenge accepted :) Can you please help me? I have asked many questions here and yet I have no solution in mind.
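
    A minimal sketch of the push-based shape this usually takes (untested; assuming SignalR's Hub API): rather than every browser polling the database each second, the tracking client calls a hub method and the hub broadcasts the new location to all connected browsers; the database write can happen in the same method if a history is needed:

        public class LocationHub : Hub
        {
            // called by the tracking client; pushes to all subscribed browsers
            public void UpdateLocation(string user, double lat, double lng)
            {
                // optionally persist the point to SQL Azure here
                Clients.All.locationUpdated(user, lat, lng);
            }
        }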

    Read the article

  • Is it possible in Perl to require that a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know what I'm asking for exactly, but I'm writing a series of subroutines to be available for many individual scripts that all process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that make it easier for me to manage it all. Each script handles a different incoming flat file with its own formatting, sorting, grouping and outputting requirements. One common aspect is that we have small text files that house counters that are used to name the output files so that we have no duplicate file names. Because the processing of the data is different for each file, I need to open the file to get my counter value. Because this is a common operation, I'd like to put it in a sub that retrieves the counter. But then I need to write specific code to process the data, and I would like a second sub that allows me to update the counter once I've processed the data. Is there a way to make the second sub call a requirement if the first one is called? Ideally it could even be an error that would prevent the script from running at all, much like a syntax error. EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for what the current process is:

        require "importLibrary.plx";

        # open data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx to get the counter value from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # process data based on unique formatting and requirements
            # output to task files based on requirements and name files using the $counter
            # increment $counter
        }

        # update counter file with new value of $counter
        &updateCounterInfo($counter);
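
    A minimal sketch of one way to get close to this (untested; write_counter_file is a hypothetical stand-in for the real update): a compile-time check like a syntax error isn't really available here, but having getCounterInfo return a guard object whose DESTROY method complains if the counter was never written back gives a run-time guarantee:

        package CounterGuard;

        sub new {
            my ($class, $value) = @_;
            return bless { value => $value, saved => 0 }, $class;
        }

        sub value { return $_[0]{value} }

        sub update {
            my ($self, $new_value) = @_;
            write_counter_file($new_value);
            $self->{saved} = 1;
        }

        sub DESTROY {
            my ($self) = @_;
            warn "counter was read but never updated!\n" unless $self->{saved};
        }

    The script would then call my $guard = CounterGuard->new(...), and anyone who forgets $guard->update($counter) gets a warning (or a die, if preferred) when the object goes out of scope.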

    Read the article

  • Defining default value in combobox in CakePHP

    - by Keyur
    Here is my CakePHP code:

        <?php
        echo $form->create('admin_merchant_form', array('action' => '#'));
        echo $form->input('company_name', array('label' => 'Company Name'));
        echo $form->input('ac_owner', array(
            'label'   => 'Account Owner',
            'options' => array('a', 'b', 'b'),
            'default' => $merchant_select,
        ));
        echo $form->end('Update');
        ?>

    This generates a form with one combobox containing the values "a, b, c", with the default value assigned from $merchant_select, which is numeric data. Now the problem: when I assign 'default' => 1, it returns 'b' in the combobox as the default value, but when writing 'default' => $merchant_select, the combobox shows only the first value, which is 'a'. The $merchant_select variable is assigned a numeric value equal to the merchant's id (1, 2 or 3) when I select any row in the grid. I also have JavaScript code which alerts with the merchant value when I select any row in the grid, so the numeric data is definitely assigned to the $merchant_select variable.
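
    A minimal sketch of the first thing worth checking (an untested guess): values that arrive via request data or JavaScript are strings, and a string '1' may not match the integer option key 1, in which case the helper falls back to the first option. Casting before handing the value to the helper may be all that's needed:

        echo $form->input('ac_owner', array(
            'label'   => 'Account Owner',
            'options' => array('a', 'b', 'c'),
            // cast: '1' (string) may not select the option keyed 1 (int)
            'default' => (int) $merchant_select,
        ));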

    Read the article

  • Spring MVC: How to get the remaining path after the controller path?

    - by Willis Blackburn
    I've spent over an hour trying to find the answer to this question, which seems like it should reflect a common use case, but I can't figure it out! Basically I am writing a file-serving controller using Spring MVC. The URLs are of the format http://www.bighost.com/serve/the/path/to/the/file.jpg, in which the part after "/serve" is the path to the requested file, which may have an arbitrary number of path segments. I have a controller like this:

        @Controller
        class ServerController {
            @RequestMapping(value = "/serve/???")
            public void serve(???) {
            }
        }

    What I am trying to figure out is: what do I use in place of "???" to make this work? I have two theories about how this should work. The first theory is that I could replace the first "???" in the @RequestMapping with a path-variable placeholder that has some special syntax meaning "capture to the end of the path." If a regular placeholder looks like "{path}" then maybe I could use "{path:**}" or "{path:/}" or something like that. Then I could use a @PathVariable annotation to refer to the path variable in the second "???". The other theory is that I could replace the first "???" with "**" (match anything) and that Spring would give me an API to obtain the remainder of the path (the part matching the "**"). But I can't find such an API. Thanks for any help you can provide!
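
    A minimal sketch of the second theory in practice (untested): Spring does expose the matched portion as a request attribute, so a /serve/** mapping plus a little string arithmetic recovers the remainder:

        @Controller
        class ServerController {

            @RequestMapping("/serve/**")
            public void serve(HttpServletRequest request, HttpServletResponse response) {
                // e.g. "/serve/the/path/to/the/file.jpg"
                String matched = (String) request.getAttribute(
                        HandlerMapping.PATH_WITHIN_HANDLER_MAPPING_ATTRIBUTE);
                String filePath = matched.substring("/serve/".length());
                // stream the file named by filePath...
            }
        }

    HandlerMapping here is org.springframework.web.servlet.HandlerMapping.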

    Read the article

  • Request for advice about class design, inheritance/aggregation

    - by Lasse V. Karlsen
    I have started writing my own WebDAV server class in .NET, and the first class I'm starting with is a WebDAVListener class, modelled after how the HttpListener class works. Since I don't want to reimplement the core HTTP protocol handling, I will use HttpListener for all it's worth, and thus I have a question. What would be the suggested way to handle this:

    1. Implement all the methods and properties found inside HttpListener, just changing the class types where it matters (i.e. the GetContext + EndGetContext methods would return a different class for WebDAV contexts), and storing and using a HttpListener object internally.
    2. Construct WebDAVListener by passing it a HttpListener class to use.
    3. Create a wrapper for HttpListener with an interface, and construct WebDAVListener by passing it an object implementing this interface.

    If going the route of passing a HttpListener (disguised or otherwise) to the WebDAVListener, would you expose the underlying listener object through a property, or would you expect the program that used the class to keep a reference to the underlying HttpListener? Also, in this case, would you expose some of the methods of HttpListener through the WebDAVListener, like Start and Stop, or would you again expect the program that used it to keep the HttpListener reference around for all those things? My initial reaction tells me that I want a combination. For one thing, I would like my WebDAVListener class to look like a complete implementation, hiding the fact that there is a HttpListener object beneath it. On the other hand, I would like to build unit tests without actually spinning up a networked server, so some kind of mocking ability would be nice to have as well, which suggests the interface-wrapper way. One way I could solve this would be this:

        public WebDAVListener()
            : this(new HttpListenerWrapper())
        {
        }

        public WebDAVListener(IHttpListenerWrapper listener)
        {
        }

    And then I would implement all the methods of HttpListener (at least all those that make sense) in my own class, by mostly just chaining the call to the underlying HttpListener object. What do you think? Final question: if I go the way of the interface, assuming the interface maps 1-to-1 onto the HttpListener class and is written just to add support for mocking, is such an interface called a wrapper or an adapter?

    Read the article

  • jQuery: Stopping a periodic ajax call?

    - by Legend
    I am writing a small jQuery plugin to update a set of divs with content obtained using Ajax calls. Initially, let's assume we have 4 divs. I am doing something like this:

        (function($) {
            ....
            // main function
            $.fn.jDIV = {
                init: function() {
                    ...
                    for (var i = 0; i < numDivs; i++) {
                        this.query(i);
                    }
                    this.handlers();
                },
                query: function(divNum) {
                    // makes the relevant ajax call
                },
                handlers: function() {
                    for (var i = 0; i < numDivs; i++) {
                        setInterval("$.fn.jDIV.query(" + i + ")", 5000);
                    }
                }
            };
        })(jQuery);

    I would like to be able to enable and disable a particular ajax query. I was thinking of adding a "start" and "stop" function instead of the "handlers" function, and subsequently storing the setInterval handle like this:

        start: function(divNum) {
            divs[divNum] = setInterval("$.fn.jDIV.query(" + divNum + ")", 5000);
        },
        stop: function(divNum) {
            clearInterval(divs[divNum]);
        }

    I did not use jQuery to set up and destroy the event handlers. Is there a better approach (perhaps using more of jQuery) to achieve this?
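
    A minimal sketch of a small improvement (untested): passing a closure to setInterval instead of a string avoids the implied eval and keeps the div number properly scoped, and keying the handles by div number makes each poll individually stoppable:

        var timers = {};

        function start(divNum) {
            // a function reference, not a string to be eval'd
            timers[divNum] = setInterval(function () {
                $.fn.jDIV.query(divNum);
            }, 5000);
        }

        function stop(divNum) {
            clearInterval(timers[divNum]);
            delete timers[divNum];
        }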

    Read the article

  • Evaluating expressions using Visual Studio 2005 SDK rather than automation's Debugger::GetExpression

    - by brone
    I'm looking into writing an add-in (or package, if necessary) for Visual Studio 2005 that needs watch-window-type functionality: evaluation of expressions and examination of the types. The automation facilities provide Debugger::GetExpression, which is useful enough, but the information provided is a bit crude. From looking through the docs, it sounds like an IDebugExpressionContext2 would be more useful. With one of these it looks as if I can get more information from an expression: detailed information about the type and any members, and so on and so forth, without having everything come through as strings. I can't find any way of actually getting an IDebugExpressionContext2, though! IDebugProgramProvider2 sort of looks relevant, in that I could start with IDebugProgramProvider2::GetProviderProcessData and then slowly drill down until reaching something that can supply my expression context, but I'll need to supply a port to this, and it's not clear how to retrieve the port corresponding to the current debug session. (Even if I tried every port, it's not obvious how to tell which port is the right one...) I'm becoming suspicious that this simply isn't a supported use case, but with any luck I've simply missed something crashingly obvious. Can anybody help?

    Read the article

  • Use properties or methods to expose business rules in C#?

    - by Val
    I'm writing a class to encapsulate some business rules, each of which is represented by a boolean value. The class will be used in processing an InfoPath form, so the rules get the current program state by looking up values in a global XML data structure using XPath operations. What's the best (most idiomatic) way to expose these rules to callers: properties or public methods? Call using properties:

        Rules rules = new Rules();
        if (rules.ProjectRequiresApproval) {
            // get approval
        } else {
            // skip approval
        }

    Call using methods:

        Rules rules = new Rules();
        if (rules.ProjectRequiresApproval()) {
            // get approval
        } else {
            // skip approval
        }

    Rules class exposing rules as properties:

        public class Rules {
            private int _amount;
            private int threshold = 100;

            public Rules() {
                _amount = someExpensiveXpathOperation;
            }

            // rule property
            public bool ProjectRequiresApproval {
                get { return _amount < threshold; }
            }
        }

    Rules class exposing rules as methods:

        public class Rules {
            private int _amount;
            private int threshold = 100;

            public Rules() {
                _amount = someExpensiveXpathOperation;
            }

            // rule method
            public bool ProjectRequiresApproval() {
                return _amount < threshold;
            }
        }

    What are the pros and cons of one over the other?

    Read the article

  • How can I do something when a runloop event is done processing?

    - by quixoto
    I have some processing in my Cocoa app that sometimes ends up calling through a hierarchy of data to do a bunch of work as the result of an event. Each small piece creates and destroys some resources. I don't want those resources around most of the time, but I would like to find a smart way of creating them before all the work and killing them at the end. Short of making those buffers etc. available globally from the "parent" or elsewhere, is there a way to know locally in some code when an event-loop run has ended? Then I could create them if they're not there, and keep them until the run loop ends, reusing them for any subsequent calls before that time. EDIT: I'm not looking for suggestions on how to restructure my code, which I may do anyway. This issue just brought up the question of how to know when the run loop is done. If I were writing in, I dunno, JavaScript, I'd use a setTimeout with zero to accomplish end-of-event cleanup. I suppose an NSTimer with an interval of zero might accomplish this too, but I'm wondering if there's something cleaner. Thanks.
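
    A minimal sketch of a cleaner hook (untested): Core Foundation run-loop observers can fire when the loop has finished processing and is about to sleep, which is effectively "the current event is done":

        // fires each time the main run loop is about to go back to sleep
        CFRunLoopObserverRef obs = CFRunLoopObserverCreateWithHandler(
            kCFAllocatorDefault, kCFRunLoopBeforeWaiting, true /* repeats */, 0,
            ^(CFRunLoopObserverRef observer, CFRunLoopActivity activity) {
                // tear down the per-event buffers here
            });
        CFRunLoopAddObserver(CFRunLoopGetMain(), obs, kCFRunLoopCommonModes);

    The same pattern with kCFRunLoopEntry or kCFRunLoopBeforeSources can mark the start of a pass if the resources should be created up front instead of lazily.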

    Read the article

  • Is there a way to automatically load navigation properties using the .NET Entity Framework?

    - by René Wolferink
    Stepping away more and more from writing SQL for my applications, I decided to give the Entity Framework a try. However, I've run into something I believe is causing me to write more code than I think is strictly necessary. When I accessed some navigation properties, I discovered that all many-to-one relations (simple references) were null and all one-to-many and many-to-many relations (EntityCollections) were empty. For example: I have a User with a reference to a Group. When I have retrieved a User, by using a simple select-by-id, the Group property is null. If I want to access the Group I have to manually load it (using User.GroupReference.Load()). So I added a GetGroup() method to User which checks whether the Group is loaded already and, if not, loads it and then returns the Group. Now this will result in a lot of highly similar methods for all navigation properties, and it all results in the navigation properties themselves not being used; only my custom-made Get"PropertyName"() methods are. I don't want to expand my queries (LINQ to Entities) to load all these properties immediately, because it's not always known up front what is needed, and besides, it would cause a lot of queries to be made. Is there a way to configure the Entity Framework to load these objects when they happen to not be present? So when I access User.Group and the group is not loaded yet, it is loaded automatically? Or am I stuck using my own Get"PropertyName"() methods as long as I'm trying to load objects only on demand (or "just in time")? Some extra info: I'm using VS2008 SP1 with .NET 3.5 SP1. The Entity Framework I'm using is the one that shipped with it.
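
    A minimal sketch of a generic helper (untested): the Entity Framework in .NET 3.5 SP1 has no automatic lazy loading (that arrived later, in EF4), but one extension method can replace all the per-property Get methods for many-to-one references:

        using System.Data.Objects.DataClasses;

        public static class EntityReferenceExtensions
        {
            // load-on-demand for any many-to-one navigation property
            public static T Ensure<T>(this EntityReference<T> reference)
                where T : class, IEntityWithRelationships
            {
                if (!reference.IsLoaded)
                    reference.Load();
                return reference.Value;
            }
        }

    Usage would then be user.GroupReference.Ensure() instead of a hand-written GetGroup().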

    Read the article
