Search Results

Search found 43110 results on 1725 pages for 'noob question'.


  • Should a Trim method generally be in the Data Access Layer or within the Domain Layer?

    - by jpierson
    I'm dealing with a database that contains data with inconsistencies such as leading and trailing white space. In general I see a lot of developers practice defensive coding by trimming almost all strings that come from the database that may have been entered by a user at some point. In my opinion it is better to do such formatting before the data is persisted, so that it is done only once and the data stays in a consistent and reliable state. Unfortunately that is not the case here, which leads me to the next best solution: a Trim method. If I trim all data as part of my data access layer, then I don't have to concern myself with defensive trimming within the business objects of my domain layer. If I instead put the trimming responsibility in my business objects, such as in the set accessors of my C# properties, I should get the same net result, except that the trim will operate on all values assigned to my business objects' properties, not just the ones that come from the inconsistent database. As a somewhat philosophical question that may determine the answer, I could ask: "Should the domain layer be responsible for defensive/coercive formatting of data?" Would it make sense to have the set accessor for a PhoneNumber property on a business object accept an unformatted or formatted string and then attempt to format it as required, or should this responsibility be pushed to the presentation and data access layers, leaving the domain layer stricter about the data it will accept? I think this may be the more fundamental question. Update: Below are a few links that I thought I should share about the topic of data cleansing:

        Information service patterns, Part 3: Data cleansing pattern
        LINQ to SQL - Format a string before saving?
        How to trim values using Linq to Sql?
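
    For illustration only (not taken from the question or the links above), a domain-layer version of the trimming could live in the set accessor, so every assignment is normalized regardless of where the value came from; the Customer type and Name property below are hypothetical:

        public class Customer
        {
            private string _name;

            // Coercive formatting in the domain layer: every assignment is trimmed,
            // whether the value arrives from the database or from user input.
            public string Name
            {
                get { return _name; }
                set { _name = value == null ? null : value.Trim(); }
            }
        }

    The data-access-layer alternative would apply the same Trim once, in the mapping code that materializes the business object, leaving the domain properties as plain assignments.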

    Read the article

  • POST XML to server, receive PDF

    - by Shaggy Frog
    Similar to this question, we are developing a Web app where the client clicks a button to receive a PDF from the server. Right now we're using the .ajax() method with jQuery to POST the data the backend needs to generate the PDF (we're sending XML) when the button is pressed, and then the backend generates the PDF entirely in memory and sends it back as application/pdf in the HTTP response. One answer to that question requires the server side to save the PDF to disk so it can give back a URL for the client to GET. But I don't want the backend caching content at all. The other answer suggests the use of a jQuery plugin, but when you look at its code, it's actually generating a form element and then submitting the form. That method will not work for us since we are sending XML data in the body of the HTTP request. Is there a way to have the browser open up the PDF without caching the PDF server-side, and without requiring us to throw out our send-data-to-the-server-using-XML solution? (I'd like the browser to behave like it does when a form element is submitted -- a POST is made and then the browser looks at the Content-Type header to determine what to do next, like load the PDF in the browser window, a la Safari.)
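
    For reference, the in-memory server side described above might look roughly like this in an ASP.NET MVC controller; the GeneratePdf helper and the action name are hypothetical stand-ins, not part of the question:

        using System.Web.Mvc;

        public class ReportsController : Controller
        {
            [HttpPost]
            public ActionResult Render()
            {
                // Read the XML posted by the client from the request body.
                string xml = new System.IO.StreamReader(Request.InputStream).ReadToEnd();

                // Build the PDF entirely in memory (hypothetical helper).
                byte[] pdfBytes = GeneratePdf(xml);

                // No fileDownloadName is passed, so no attachment disposition is set;
                // the browser decides what to do based on the Content-Type alone.
                return File(pdfBytes, "application/pdf");
            }

            private byte[] GeneratePdf(string xml) { /* ... */ return new byte[0]; }
        }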

    Read the article

  • Can I specify a default value?

    - by atch
    Why is it that for user-defined types, when creating an array of objects, every element of the array is initialized with the default ctor, but when I create an array of a built-in type this isn't the case? And a second question: is it possible to specify a default value to be used when initializing? Something like this (not valid): char* p = new char[size]('\0'); And another question on this topic while I'm at arrays. Given that every element of an array of a user-defined type will be initialized with the default value: firstly, why? If arrays of built-in types do not initialize their elements with their defaults, why do they do it for UDTs? And secondly, is there a way to switch it off/avoid/circumvent somehow? It seems like a bit of a waste if, for example, I have created an array of size 10000 and then the default ctor will be invoked 10000 times and I will (later on) overwrite these values anyway. I think the behaviour should be consistent: either every type of array should be initialized, or none. And I think the behaviour for built-in arrays is more appropriate.

    Read the article

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor can be garbage collected after my code has finished using it. I have tried the suggestion in the answers from the previous question, but it does not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is that possible to do in a reliable way? EDIT: I currently have the following test based on Paul Stovell's answer, which succeeds:

        [TestMethod]
        public void ReleaseTest()
        {
            WindsorContainer container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);

            Assert.AreEqual(0, ReleaseTester.refCount);
            var weakRef = new WeakReference(container.Resolve<ReleaseTester>());
            Assert.AreEqual(1, ReleaseTester.refCount);

            GC.Collect();
            GC.WaitForPendingFinalizers();

            Assert.AreEqual(0, ReleaseTester.refCount, "Component not released");
        }

        private class ReleaseTester
        {
            public static int refCount = 0;

            public ReleaseTester()
            {
                refCount++;
            }

            ~ReleaseTester()
            {
                refCount--;
            }
        }

    Am I right in assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy?
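
    As a side note (not from the question), a common way to assert collectability without a static ref count is to hold only a WeakReference and check IsAlive after forcing a collection. The sketch below would live inside the same test class; the allocation is kept in a separate method so that no live stack reference pins the instance (in a debug build the JIT may still extend lifetimes, so this pattern is most reliable in release builds):

        // Creates the instance in its own method so that, once this method returns,
        // nothing on the stack keeps a strong reference to it.
        private static WeakReference AllocateAndForget()
        {
            var instance = new object();        // stand-in for the resolved component
            return new WeakReference(instance);
        }

        [TestMethod]
        public void Instance_can_be_garbage_collected()
        {
            WeakReference weakRef = AllocateAndForget();

            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Assert.IsFalse(weakRef.IsAlive, "Instance was not collected");
        }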

    Read the article

  • asp.net mvc ajax helper/post additional data w/o jquery.

    - by Jopache
    I would like to use the ajax helper to create ajax requests that send additional, dynamic data with the post (for example, get the last element with class = blogCommentDateTime and send its value to the controller, which will return only blog comments after it). I have successfully done so with the help of the jQuery Form plugin, like so:

        $(document).ready(function () {
            $("#addCommentForm").submit(function () {
                var lastCommentDate = $(".CommentDateHidden:last").val();
                var lastCommentData = { lastCommentDateTicks: lastCommentDate };
                var formSubmitParams = {
                    data: lastCommentData,
                    success: AddCommentResponseHandler
                };
                $("#addCommentForm").ajaxSubmit(formSubmitParams);
                return false;
            });
        });

    This form was created with the html.beginform() method. I am wondering if there is an easy way to do this using the ajax.beginform() helper? When I try to use the same code but replace html.beginform() with ajax.beginform(), submitting the form issues 2 posts (which is understandable: one is taken care of by the helper, the other one by me with the JS above). I can't create 2 requests, so this option is out. I tried getting rid of the return false and changing ajaxSubmit() to ajaxForm() so that it would only "prepare" the form; this results in only one post, but it does not include the extra parameter that I defined, so this is worthless as well. I then tried keeping the ajaxForm() but calling it whenever the submit button on the form gets clicked rather than when the form gets submitted (I guess this is almost the same thing), and that results in 2 posts also. The biggest reason I am asking this question is that I have run into some issues with the JavaScript above and the validation provided by the MVC framework (which I will set up another question for) and would like to know this so I can dive further into my validation issue.

    Read the article

  • What can I read from the iPad Camera Connection Kit?

    - by HELVETICADE
    I'm building a small controller device that I'd like to partner with a computer. I've settled on using OSC out from my custom-built hardware and am pretty satisfied with what I can get from WOscLib. Two goals I'd like to achieve are portability and a favorable ratio between battery life and computing power, and this has lured me towards using iPhoneOS to accomplish my goals. I think the iPad would suit my needs perfectly, except that using wifi to broadcast OSC out from my device would require a third device and would destroy the goal of portability, whilst also introducing potential latency and stability headaches. My question is pretty simple: can I push my OSC out FROM my controller TO an iPad via USB and the Camera Connection Kit? If I could accomplish this, the two major goals of my project would be fulfilled very nicely. This seems like it should be a simple little question, but researching it obsessively over the past few weeks has left me almost more uncertain than if I had done no research at all. I'd really like some more confidence before I go down this route, and it seems like it should be possible. Any insight would be very, very appreciated.

    Read the article

  • JSF: Avoid nested update calls

    - by dhroove
    I don't know if I am on the right track or not, but I'm asking this question anyway. In my JSF project I have this tree structure:

        <p:tree id="resourcesTree" value="#{someBean.root}" var="node"
                selectionMode="single" styleClass="no-border"
                selection="#{someBean.selectedNode}" dynamic="true">
            <p:ajax listener="#{someBean.onNodeSelect}"
                    update=":centerPanel :tableForm :tabForm" event="select"
                    onstart="statusDialog.show();" oncomplete="statusDialog.hide();" />
            <p:treeNode id="resourcesTreeNode">
                <h:outputText value="#{node}" id="lblNode" />
            </p:treeNode>
        </p:tree>

    I have to update this tree after I add or delete something. But whenever I update it, the nested updates are also triggered, meaning it also tries to update the components ":centerPanel :tableForm :tabForm". This gives me an error that :tableForm was not found in the view, because those forms load in my central panel and this tree is in my right panel. When I am doing some operation on the tree, it is not always the case that :tableForm is present in my central panel (the design is just like that). So now my question is: can I put some condition in place, or is there any way to specify when the nested components should be updated and when not? In a nutshell, is there any way to update only :resourcesTree in such a way that the nested updates are not called, so that I can avoid the error? Thanks in advance.

    Read the article

  • SQL 2 INNER JOINS with 3 tables

    - by Jelmer Holtes
    I've a question about a SQL query. I'm building a prototype webshop in ASP.NET with Visual Studio. Now I'm looking for a solution to view my products. I've built a database in MS Access; it consists of multiple tables. The tables which are important for my question are: Product, Productfoto, Foto. The relations between the tables are shown below. For me it is important to get three pieces of data: the product title, price and image. The product title and the price are in the Product table. The images are in the Foto table. Because a product can have more than one picture, there is an N - M relation between them, so I've split it up; I did that in the Productfoto table. The connection between them is:

        product.artikelnummer -> productfoto.artikelnummer
        productfoto.foto_id   -> foto.foto_id

    That way I can read the filename (in the database: foto.bestandnaam). I've created the first inner join and tested it in Access; this works:

        SELECT titel, prijs, foto_id
        FROM Product
        INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikelnummer

    But I need another INNER JOIN; how could I create that? I guess something like this (this one will give me an error):

        SELECT titel, prijs, bestandnaam
        FROM Product ((
        INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikkelnummer )
        INNER JOIN foto ON productfoto.foto_id = foto.foto_id)

    Can anyone help me?

    Read the article

  • R : remove columns from dataframe where ALL values are NA

    - by Sophomore
    Hello everybody! I'm having some trouble with my huge data frame and couldn't really resolve the question myself. The dataframe has some properties as columns and each row represents one data set. I've done some sanitizing on this dataframe (e.g. getting rid of datasets which are not to be included in the evaluation). (Whoever might be interested: beforehand I aggregate around 5000 single text files and put them in a tsv; some of the properties have a sequence number like "button.pressed.1" ... "button.pressed.n". Some of the excluded sets had really high numbers for n; all sets left have much smaller numbers for n, but the property "button.pressed.50" is still there and all remaining sets have an NA in that column. Actually it's a different property, but the example should clarify my intention...) So the question is quite simple (for some sophisticated R pro): I need to get rid of columns where the value is NA for ALL rows. Could someone please help me out? (All I have managed is to get rid of columns where at least one NA exists, which dropped about half my columns.)

    Read the article

  • VirtualBox limits size of .js file that can be included from guest additions folder?

    - by c69
    This question might belong to SuperUser, but I'll try to ask it here anyway, because I believe some web developers might have encountered this weird behavior. When testing a site for IE8/WinXP compatibility on VirtualBox I ran into a weird "$ is undefined" issue, caused by jQuery (and jQuery UI) not being included when referenced by a relative path that resolves to a file:/// URL - seemingly because their size was too big (above 200KB). Simply replacing the links to those 2 big files with http:// ones solved the issue for me. But here is the question: why did this happen? Is it a misconfiguration? A bug? A known design decision? Details:

        - VirtualBox 4.1.8
        - host os: win7 64bit, guest os: xp sp3 32 bit
        - guest additions installed, page was launched from VB shared folder
        - the bug was manifesting itself in all browsers (even in Opera, which ignores IE security settings, afaik)
        - IE configuration is default
        - script was included like this: <script type="text/javascript" src="js/libs/jquery/jquery-1.7.2.js"></script>
        - the exact size limit was not determined

    Read the article

  • How to create more complex Lucene query strings?

    - by boris callens
    This question is a spin-off from this question. My inquiry is two-fold, but because both parts are related I think it is a good idea to put them together. How to programmatically create queries: I know I could start creating strings and get that string parsed with the query parser, but as I gather bits and pieces of information from other resources, there is a programmatic way to do this. And: what are the syntax rules for Lucene queries? --EDIT-- I'll give an example of the requirements for a query I would like to make. Say I have 5 fields:

        First Name
        Last Name
        Age
        Address
        Everything

    All fields are optional; the last field should search over all the other fields. I go over every field and see if it's IsNullOrEmpty(). If it's not, I would like to append a part to my query so it adds the relevant search clause. First name and last name should be exact matches and have more weight than the other fields. Age is a string and should exact-match. Address can vary in order. Everything can also vary in order. How should I go about this?
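
    Purely as an illustrative sketch (not from the question), the programmatic route usually means composing a BooleanQuery out of per-field clauses instead of parsing a string. The field names mirror the list above, and the exact class and property names vary between Lucene versions; this assumes a Lucene.NET 3.x-style API:

        using Lucene.Net.Index;
        using Lucene.Net.Search;

        // Hypothetical helper: builds the query from whichever fields were filled in.
        static Query BuildSearchQuery(string firstName, string lastName, string age)
        {
            var query = new BooleanQuery();

            if (!string.IsNullOrEmpty(firstName))
            {
                // Exact match on the field, weighted more heavily than the other clauses.
                var clause = new TermQuery(new Term("FirstName", firstName));
                clause.Boost = 2.0f;
                query.Add(clause, Occur.MUST);
            }
            if (!string.IsNullOrEmpty(lastName))
            {
                var clause = new TermQuery(new Term("LastName", lastName));
                clause.Boost = 2.0f;
                query.Add(clause, Occur.MUST);
            }
            if (!string.IsNullOrEmpty(age))
            {
                query.Add(new TermQuery(new Term("Age", age)), Occur.MUST);
            }
            // ... Address and the catch-all "Everything" field would follow the same pattern ...
            return query;
        }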

    Read the article

  • stackoverflow tags and related tags

    - by parminder
    Hi Experts, I am working on a website where a user can add tags to the books they have posted. It is similar to Stack Overflow, but I am keeping my tags in a different table. Here are the tables/classes in LINQ to Entities:

        Books     { bookId, Title }
        Tags      { Id, Tag }
        BooksTags { Id, BookId, TagId }

    Here are a few sample records:

        Books
        BookId   Title
        113421   A
        113422   B

        Tags
        Id  Tag
        1   ASP
        2   C#
        3   CSS
        4   VB
        5   VB.NET
        6   PHP
        7   java
        8   pascal

        BooksTags
        Id  BookId   TagId
        1   113421   1
        2   113421   2
        3   113421   3
        4   113421   4
        5   113422   1
        6   113422   4
        7   113422   8

    Question 1: I need to write a LINQ to Entities query which gives me data according to the tags. Say I want the BookIds for TagId = 1; it should return BookIds 113421 and 113422, as the tag exists on both books. But if I ask for data for tags 1 and 2, it should return only book 113421, as that is the only book where both tags are present. Question 2: I also need the tags and their counts to show as related tags, so in the first case my RelatedTags result should be:

        RelatedTags
        Tag  Count
        2    1
        3    1
        4    2
        8    1

    In the second case, when two tags are requested, the result should be:

        RelatedTags
        Tag  Count
        3    1
        4    1

    I got the first part working by converting a SQL query in Linqer, but that seems like hell, so I want to know if there is a better idea. I have used a dynamic where clause to include the two tags. If someone can help, it will be much appreciated. Thanks, Parminder
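
    A sketch of one possible LINQ to Entities approach (not from the question; the db context and entity-set names are assumed): filter the join table to the requested tags, group by book, and keep only the books that matched every requested tag; related tags are then grouped and counted from the remaining links.

        int[] requestedTags = { 1, 2 };          // the tags being filtered on
        int tagCount = requestedTags.Length;     // hoisted so the query sees a plain value

        // Books that carry ALL of the requested tags.
        var bookIds = db.BooksTags
            .Where(bt => requestedTags.Contains(bt.TagId))
            .GroupBy(bt => bt.BookId)
            .Where(g => g.Count() == tagCount)
            .Select(g => g.Key);

        // Related tags: every other tag on those books, with its occurrence count.
        var relatedTags = db.BooksTags
            .Where(bt => bookIds.Contains(bt.BookId)
                         && !requestedTags.Contains(bt.TagId))
            .GroupBy(bt => bt.TagId)
            .Select(g => new { TagId = g.Key, Count = g.Count() });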

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the sql proc. Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the sql proc as the end date. So, the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question. Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, then is the web app missing out on huge performance gains? Possible Solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the sql proc--"5/5/2010" in this case. Please speak to this as well. Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both procs, since the current time is: 2010-05-04 15:41:27.
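
    A small sketch of the calling side described above (the command setup and connection variable are hypothetical, not from the question): both variants pass an end-exclusive upper bound, but only the second produces the same parameter value for every run on a given day.

        using (var cmd = new SqlCommand("GetFooData", connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@StartDate", new DateTime(2010, 5, 1));

            // Changes on every call: 2010-05-04 15:41:27, 15:42:03, ...
            // cmd.Parameters.AddWithValue("@EndDate", DateTime.Now);

            // Stable for the whole day: always midnight of the next day (2010-05-05),
            // and still end-exclusive, so today's rows are included.
            cmd.Parameters.AddWithValue("@EndDate", DateTime.Today.AddDays(1));
        }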

    Read the article

  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt, but used map:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply %1 %2) funs (repeat args))))

        => ((juxt inc dec str) 1)
        [2 0 "1"]
        => ((could-be-lazy-juxt inc dec str) 1)
        (2 0 "1")
        => ((juxt * / -) 6 2)
        [12 3 4]
        => ((could-be-lazy-juxt * / -) 6 2)
        (12 3 4)

    As posted in the original question, I have little clue about the laziness or performance of it, but timing in the REPL does suggest something lazy-ish is going on:

        => (time (apply (juxt + -) (range 1 100)))
        "Elapsed time: 0.097198 msecs"
        [4950 -4948]
        => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
        "Elapsed time: 0.074558 msecs"
        (4950 -4948)
        => (time (apply (juxt + -) (range 10000000)))
        "Elapsed time: 1019.317913 msecs"
        [49999995000000 -49999995000000]
        => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
        "Elapsed time: 0.070332 msecs"
        (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (the print of the outcome 'feels' about as long in both). Doing a 'take x' on the function only limits the number of functions evaluated, which is probably limited in its applicability, and limiting the other parameters by 'take' should be just as lazy in normal juxt. Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a composition step between other lazy functions? What are the performance (mem / cpu / object count / compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector? Edit: Somehow things can always be done more simply in Clojure:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply % args) funs)))

    Read the article

  • Fast, easy, and secure method to perform DB actions with GET

    - by rob - not a robber
    Hey All, Sort of a methods/best practices question here that I am sure has been addressed, yet I can't find a solution based on the vague search terms I enter. I know starting off the question with "fast and easy" will probably draw out a few sighs, so my apologies. Here is the deal. I have a logged-in area where an ADMIN can do a whole host of POST operations to input data relating to their profile. The way I have the data structured is pretty distinct and well segmented in most tables, as it relates to the ID of the admin. Now, I have a table where I dump one type of data into and differentiate this data by assigning the ADMIN's unique ID to each record. In other words, all ADMINs have this one type of data written to this table; I just differentiate by the ADMIN ID on each record. I was planning on letting the ADMIN remove these records by clicking on a link with a query string - obviously using GET. Obviously, the query structure is in the link, so any logged-in admin could then exploit the URL and delete a competitor's records. Is the only way to safely do this through POST, or should I pass through the session info that includes the password and validate it against the ADMIN ID that is requesting the delete? This is obviously much more work for me. As they said in the auto repair biz I used to work in... there are 3 ways to do a job: fast, good, and cheap. You can only have two at a time. Fast and cheap will not be good. Good and cheap will not have fast turnaround. Fast and good will NOT be cheap. haha I guess that applies here... can never have fast, easy and secure all at once ;) Thanks in advance...
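
    Whatever the HTTP verb ends up being, the usual safeguard is to scope the delete to the record owner taken from the authenticated session rather than from anything in the URL. A rough C# sketch of that idea (the table, column and connection names are made up for illustration):

        using System.Data.SqlClient;

        public static class RecordRepository
        {
            // recordId comes from the request; sessionAdminId comes from the
            // server-side session, never from the query string.
            public static void Delete(int recordId, int sessionAdminId, string connectionString)
            {
                const string sql =
                    "DELETE FROM AdminRecords WHERE Id = @Id AND AdminId = @AdminId";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@Id", recordId);
                    cmd.Parameters.AddWithValue("@AdminId", sessionAdminId);
                    conn.Open();
                    cmd.ExecuteNonQuery();   // affects zero rows if the record belongs to someone else
                }
            }
        }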

    Read the article

  • Moving from a non-clustered PK to a clustered PK in SQL 2005

    - by adaptr
    Hi all, I recently asked this question in another thread, and thought I would reproduce it here with my solution: what if I have an auto-increment INT as my non-clustered primary key, and there are about 15 foreign keys defined against it? (snide comment about the original designer being braindead in the original :) ) This is a 15M row table, on a live database, SQL Standard, so dropping indexes is out of the question. Even temporarily dropping the foreign key constraints will be difficult. I'm curious if anybody has a solution that causes minimal downtime. I tested this in our testing environment and finally found that the downtime wasn't as severe as I had originally feared. I ended up writing a script that drops all FK constraints, then drops the non-clustered key, re-creates the PK as a clustered index, and finally re-creates all FKs WITH NOCHECK to avoid trawling through all FKs to check constraint compliance. Then I just enable the CHECK constraints to enable constraint checking from that point onwards, and all is dandy :) The most important thing to realize is that during the time the FKs are absent, there MUST NOT be any INSERTs or DELETEs on the parent table, as this may break the constraints and cause issues in the future. The total time taken for clustering a 15M row, 800MB index was ~4 minutes :)

    Read the article

  • Broken console in Maven project using NetBeans

    - by Maciek Sawicki
    Hi, I have a strange problem with my NetBeans+Maven installation. This is the shortest code that reproduces the problem:

        import java.util.Scanner;

        public class App {
            public static void main(String[] args) {
                // Create a scanner to read from the keyboard
                Scanner scanner = new Scanner(System.in);
                Scanner s = new Scanner(System.in);
                String param = s.next();
                System.out.println(param);
            }
        }

    When I'm running it as a Maven project inside NetBeans, the console seems to be broken. It just ignores my input. It looks like an infinite loop at System.out.println(param);. However, this project works fine when it's compiled as a "Java Application" project. It also works OK if I build and run it from cmd. System info:

        OS: Vista
        IDE: NetBeans 6.8
        Maven: apache-maven-2.2.1

    //edit The built program (using Maven from NetBeans) works fine; I just can't test it using NetBeans. And I think I forgot to ask the question ;). So of course my first question is: how can I fix this problem? And second: is there any workaround for this? For example, configuring NetBeans to run an external command line app instead of using the built-in console.

    Read the article

  • How are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts; the first is the most important and concerns the present: Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow? Even where you're not using any new features, how have they affected your current choices? What new features are you using now, either in production or otherwise? The second part is a follow-up, concerning the new standard once it is final: Do you expect to use it immediately? What are you doing to prepare for C++0x, other than as listed for the previous questions? Obviously, compiler support must be there, but there are still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption? Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating—even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.

    Read the article

  • Pros and Cons of using SqlCommand Prepare in C#?

    - by MadBoy
    When I was reading books to learn C# (they might have been old Visual Studio 2005 books), I encountered advice to always use SqlCommand.Prepare every time I execute a SQL call (whether it's a SELECT/UPDATE or INSERT on SQL SERVER 2005/2008) and pass parameters to it. But is it really so? Should it be done every time? Or just sometimes? Does it matter whether it's one parameter being passed or five or twenty? What boost should it give, if any? Would it be noticeable at all? (I've been using SqlCommand.Prepare here and skipped it there and never had any problems or noticeable differences.) For the sake of the question this is my usual code, but this is more of a general question:

        public static decimal pobierzBenchmarkKolejny(string varPortfelID, DateTime data,
            decimal varBenchmarkPoprzedni, decimal varStopaOdniesienia)
        {
            const string preparedCommand =
                @"SELECT [dbo].[ufn_BenchmarkKolejny](@varPortfelID, @data,
                    @varBenchmarkPoprzedni, @varStopaOdniesienia) AS 'Benchmark'";

            using (var varConnection = Locale.sqlConnectOneTime(Locale.sqlDataConnectionDetailsDZP))
            //if (varConnection != null)
            {
                using (var sqlQuery = new SqlCommand(preparedCommand, varConnection))
                {
                    sqlQuery.Prepare();
                    sqlQuery.Parameters.AddWithValue("@varPortfelID", varPortfelID);
                    sqlQuery.Parameters.AddWithValue("@varStopaOdniesienia", varStopaOdniesienia);
                    sqlQuery.Parameters.AddWithValue("@data", data);
                    sqlQuery.Parameters.AddWithValue("@varBenchmarkPoprzedni", varBenchmarkPoprzedni);

                    using (var sqlQueryResult = sqlQuery.ExecuteReader())
                        if (sqlQueryResult != null)
                        {
                            while (sqlQueryResult.Read())
                            {
                                //sqlQueryResult["Benchmark"];
                            }
                        }
                }
            }
        }
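
    One detail worth noting from the MSDN guidance (separate from the question itself): before Prepare is called, every parameter should already be added with an explicit type, and variable-length parameters need a Size. A hedged sketch of that ordering, with made-up table and column names and assuming an already-open SqlConnection named connection:

        // Assumes: using System.Data; using System.Data.SqlClient;
        using (var cmd = new SqlCommand(
            "SELECT Amount FROM Benchmarks WHERE PortfolioId = @PortfolioId", connection))
        {
            cmd.Parameters.Add("@PortfolioId", SqlDbType.VarChar, 50).Value = portfolioId;

            // All parameters exist and have explicit types/sizes, so the statement
            // can be prepared; the prepared form is then reused on each execution.
            cmd.Prepare();

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* ... */ }
            }
        }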

    Read the article

  • Comparing lists in Standard ML

    - by user1050640
    I am extremely new to SML and we just got our first programming assignment for class, and I need a little insight. The question is: write an ML function, called minus: int list * int list -> int list, that takes two non-decreasing integer lists and produces a non-decreasing integer list obtained by removing the elements from the first input list which are also found in the second input list. For example:

        minus( [1,1,1,2,2], [1,1,2,3] ) = [1,2]
        minus( [1,1,2,3], [1,1,1,2,2] ) = [3]

    Here is my attempt at answering the question. Can anyone tell me what I am doing incorrectly? I don't quite understand parsing lists.

        fun minus(xs,nil) = []
          | minus(nil,ys) = []
          | minus(x::xs,y::ys) =
                if x=y then minus(xs,ys)
                else x :: minus(x,ys);

    Here is a fix I just did; I think this is right now?

        fun minus(L1,nil) = L1
          | minus(nil,L2) = []
          | minus(L1,L2) =
                if hd(L1) > hd(L2) then minus(L1,tl(L2))
                else if hd(L1) = hd(L2) then minus(tl(L1),tl(L2))
                else hd(L1) :: minus(tl(L1), L2);

    Read the article

  • Given a main function and a cleanup function, how (canonically) do I return an exit status in Bash/Linux?

    - by Zac B
    Context: I have a bash script (a wrapper for other scripts, really), that does the following pseudocode:

        do a main function
        if the main function returns:
            $returncode = $?    #most recent return code
        if the main function runs longer than a timeout:
            kill the main function
            $returncode = 140   #the semi-canonical "exceeded allowed wall clock time" status
        run a cleanup function
        if the cleanup function returns an error:   #nonzero return code
            exit $?    #exit the program with the status returned from the cleanup function
        else    #cleanup was successful
            ....

    Question: What should happen after the last line? If the cleanup function was successful, but the main function was not, should my program return 0 (for the successful cleanup), or $returncode, which contains the (possibly nonzero and unsuccessful) return code of the main function? For a specific application, the answer would be easy: "it depends on what you need the script for." However, this is more of a general/canonical question (and if this is the wrong place for it, kill it with fire): in Bash (or Linux in general) programming, do you typically want to return the status that "means" something (i.e. $returncode) or do you ignore such subjectivities and simply return the code of the most recent function? This isn't Bash-specific: if I have a standalone executable of any kind, how, canonically should it behave in these cases? Obviously, this is somewhat debatable. Even if there is a system for these things, I'm sure that a lot of people ignore it. All the same, I'd like to know. Cheers!

    Read the article

  • How To Create a Flexible Plug-In Architecture?

    - by justkt
    A repeating theme in my development work has been the use or creation of an in-house plug-in architecture. I've seen it approached many ways - configuration files (XML, .conf, and so on), inheritance frameworks, database information, libraries, and others. In my experience:

        - A database isn't a great place to store your configuration information, especially co-mingled with data.
        - Attempting this with an inheritance hierarchy requires knowledge about the plug-ins to be coded in, meaning the plug-in architecture isn't all that dynamic.
        - Configuration files work well for providing simple information, but can't handle more complex behaviors.
        - Libraries seem to work well, but the one-way dependencies have to be carefully created.

    As I seek to learn from the various architectures I've worked with, I'm also looking to the community for suggestions. How have you implemented a solid plug-in architecture? What was your worst failure (or the worst failure you've seen)? What would you do if you were going to implement a new plug-in architecture? What SDK or open source project that you've worked with has the best example of a good architecture? A few examples I've been finding on my own:

        - Perl's Module::Pluggable
        - An SO question with a list for Java (including Service Provider Interfaces)
        - An SO question for C++ pointing to a Dr. Dobbs article

    These examples seem to play to various language strengths. Is a good plugin architecture necessarily tied to the language? Is it best to use tools to create a plugin architecture, or to do it on one's own following models?
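
    To make the "libraries" option above concrete, here is a minimal reflection-based sketch in C#; the IPlugin contract and directory layout are invented for the example, not taken from any of the projects listed. The host only depends on a small interface, and concrete plug-ins are discovered and instantiated at run time:

        using System;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        // The one-way dependency: plug-in assemblies reference this contract,
        // while the host never references the plug-ins directly.
        public interface IPlugin
        {
            string Name { get; }
            void Execute();
        }

        public static class PluginLoader
        {
            // Scans a folder for assemblies and instantiates every concrete IPlugin found.
            public static IPlugin[] LoadFrom(string directory)
            {
                return Directory.GetFiles(directory, "*.dll")
                    .Select(Assembly.LoadFrom)
                    .SelectMany(assembly => assembly.GetTypes())
                    .Where(type => typeof(IPlugin).IsAssignableFrom(type)
                                   && !type.IsAbstract && !type.IsInterface)
                    .Select(type => (IPlugin)Activator.CreateInstance(type))
                    .ToArray();
            }
        }

    Configuration-file and database-driven designs often end up in a similar place: some piece of stored data names the type to load, and reflection does the rest.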

    Read the article

  • Using a 64-bit compiler in Microsoft Visual C++

    - by Ben
    This question is essentially identical to an earlier question I had that didn't receive any answers; hopefully someone can help me out this time. I am trying to compile a VC++ project as 64 bit using Visual C++ Express 2010. I know that the 64 bit compiler does not come with the default installation of VC++ Express, so I installed the Windows SDK for Windows 7 as specified here (http://msdn.microsoft.com/en-us/library/9yb4317s.aspx), which includes the 64 bit compiler as I understand it. However, there is still no 64 bit option in the configuration manager for VC++. After some searching I found and completed this tutorial (http://jenshuebel.wordpress.com/2009/02/12/visual-c-2008-express-edition-and-64-bit-targets/) as well as the various links at the bottom of this page. Despite all my efforts, I still cannot get the 64 bit compiler to show in VC++ (i.e. the 64 bit compiler won't show under "Active solution platform" in the configuration manager). If anyone has any experience/tips with getting this to work I would really appreciate it. FYI - I am running Windows 7 (x64).

    Read the article

  • Which text margin does SWT Table use when drawing text?

    - by Zordid
    I got a relatively easy question - but I cannot find anything anywhere to answer it. I use a simple SWT table widget in my application that displays only text in the cells. I have an incremental search feature and want to highlight text snippets in all cells if they match. So when typing "a", all "a"s should be highlighted. To get this, I add an SWT.EraseItem listener to interfere with the background drawing. If the current cell's text contains the search string, I find the positions and calculate relative x-coordinates within the text using event.gc.stringExtent - easy. Now, there's a flaw in this. The table does not draw the text without a margin, so my x coordinate does not really match - it is slightly off by a few pixels! But how many? Where do I retrieve the cell's text margins that the table's own drawing will use? No clue. Cannot find anything. :-( Bonus question: the table's draw method also shortens text and adds "..." if it does not fit into the cell. Hmm. My occurrence finder takes the TableItem's text and thus also tries to mark occurrences that are actually not visible because they are consumed by the "...". How do I get the shortened text, and not the "real" text, within the EraseItem draw handler? Thanks!

    Read the article

  • Does CSS have a "start over" feature?

    - by Rick Wayne
    I'm using calendar_date_select (henceforth CDS) in a Rails application, and have a stupid question. When I embed the CDS component in the middle of an already-CSS-styled page, all manner of things go ugly-wrong with it (spacing, fonts, etc.). Clearly the elements inside the CDS have inherited unwanted stuff from the styles already working in the containing page. Now, I could use a combination of, say, Safari's CSS debugging and analyze what's wrong element-by-element. But that's (A) tedious, and (B) might load up my component's styles with tons of container-defeating special cases. If nothing else, I'm certain to change the containing page's styles in the future and would have to maintain the special cases. My question: Is it possible to have a DIV in a page that essentially backs out all the existing styling? Is there a simple one-liner that will do this? Failing that, can it be done on an element-by-element basis? E.g. I know what tags the CDS generates, so I could list each of them: { p: "#--NOTHING--#"; a: "#--NOTHING--#"; } where #--NOTHING--# is the Magic Turn Off All Inherited Styles incantation. http://code.google.com/p/calendardateselect/ Thanks, peeps.

    Read the article
