Search Results

Search found 6619 results on 265 pages for 'digital logic'.

Page 219/265

  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to produce and compatible with the tools people want to use.

    I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;.

    The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there are a lot of changes being made to the views all the time -- and a ton of new ones are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require.

    Some options I've thought of:

    - Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
    - Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)

    How would you deal with a complex reporting problem like this?
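    One way to keep the view definitions versioned alongside the app, sketched below with entirely hypothetical names (a study_summaries view over a studies table, not the poster's schema): define each view in a migration and wrap it in a thin read-only ActiveRecord model, so select * from my_view becomes an ordinary model call.

        # A minimal sketch under assumed names; adapt the SELECT to the real views.sql content.
        class CreateStudySummariesView < ActiveRecord::Migration
          def self.up
            execute <<-SQL
              CREATE OR REPLACE VIEW study_summaries AS
              SELECT studies.id AS id, studies.title AS title, studies.started_on AS started_on
              FROM studies
            SQL
          end

          def self.down
            execute "DROP VIEW study_summaries"
          end
        end

        class StudySummary < ActiveRecord::Base
          self.table_name = "study_summaries"   # use set_table_name on older Rails versions

          def readonly?
            true   # the view is not writable, so guard against accidental saves
          end
        end

        # StudySummary.all can then feed the CSV export directly.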

    Read the article

  • How to handle not-enough-isolatedstorage issue deep in data loader?

    - by Edward Tanguay
    I have a Silverlight application which loads data from many external data sources into IsolatedStorage. If, while loading any of these sources, there is not enough IsolatedStorage, execution ends up in a catch block. At that point I would like to ask the user to click a button approving an increase in the IsolatedStorage capacity.

    The problem is that, although I have a "SwitchPage()" method with which I display a page, if I access it at this point it is too deep in the loading process, and the application always goes into an endless loop, hangs and crashes. I need a way to branch out of the loading process completely, to an independent UserControl which has a button and code-behind that does the increase logic. What is a solution for an application to be able to branch out of a loading-process catch block like this and display a user control with a button asking the user to increase the IsolatedStorage?

        public static void SaveBitmapImageToIsolatedStorageFile(OpenReadCompletedEventArgs e, string fileName)
        {
            try
            {
                using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
                {
                    using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(fileName, FileMode.Create, isf))
                    {
                        Int64 imgLen = (Int64)e.Result.Length;
                        byte[] b = new byte[imgLen];
                        e.Result.Read(b, 0, b.Length);
                        isfs.Write(b, 0, b.Length);
                        isfs.Flush();
                        isfs.Close();
                        isf.Dispose();
                    }
                }
            }
            catch (IsolatedStorageException)
            {
                // handle: present user with button to increase isolated storage
            }
            catch (TargetInvocationException)
            {
                // handle: not saved
            }
        }
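    A minimal sketch of the quota-increase side, assuming a hypothetical UserControl with a button named IncreaseStorageButton: Silverlight only honours IncreaseQuotaTo from inside a user-initiated event handler, so the catch block can merely record that more space is needed and show this control; the actual request happens later, in the Click handler.

        using System.IO.IsolatedStorage;
        using System.Windows;
        using System.Windows.Controls;

        public partial class StorageUpgradePrompt : UserControl
        {
            // Wired to IncreaseStorageButton.Click in the (hypothetical) XAML.
            private void IncreaseStorageButton_Click(object sender, RoutedEventArgs e)
            {
                using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
                {
                    long requested = isf.Quota + 5 * 1024 * 1024;   // ask for 5 MB more
                    if (isf.IncreaseQuotaTo(requested))
                    {
                        // granted: signal the loader to retry the save that failed
                    }
                    // declined: leave the prompt visible or show a message
                }
            }
        }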

    Read the article

  • How to find an embedded platform?

    - by gmagana
    I am new to the hardware-locating side of embedded programming, and after being completely overwhelmed with all the choices out there (PC/104, custom boards, a zillion options for each board, volume discounts, devel kits, ahhh!!) I am asking here for some direction.

    Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me, so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need:

    Required:
    - 2 RS232 serial ports (one used all the time for the primary UI, the second one not continuous)
    - 1 modem (9600+ baud OK) [the modem will be in simultaneous use with only one of the serial port devices, so interrupt sharing with one serial port is OK, but not both]
    - Minimum permanent/long-term storage: whatever the O/S requires + 1 MB (executable) + 512 KB (data files)
    - RAM: minimal, whatever the O/S requires plus maybe 1 MB for the executable

    Nice to have:
    - USB port(s)
    - Ethernet network port
    - Wireless network

    Implementation languages (any O/S, I will adapt):
    - First choice Java/C# (Mono OK)
    - Second choice is C/Pascal
    - Third is BASIC

    OK, given all this, I am having a lot of trouble finding hardware that will support this at low cost. Every manufacturer site I visit has a lot of options, and it's difficult to see whether their offering will even satisfy my must-have requirements (for example, they sometimes list 3 "serial ports", but it appears that only one of the three is RS232, and they don't mention what the other two are). The #1 constraint is cost, #2 is size. Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-).

    EDIT: A bit of background: this is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS system, so I need to reimplement this on a modern platform. I can only change the programming and the motherboard hardware.

    Read the article

  • JAX-RS --- How to return JSON and HTTP Status code together?

    - by masato-san
    I'm writing a REST web app (NetBeans 6.9, JAX-RS, TopLink Essentials) and trying to return JSON and an HTTP status code. I have code ready and working that returns JSON when the HTTP GET method is called from the client. Code snippet:

        @Path("get/id")
        @GET
        @Produces("application/json")
        public M_?? getMachineToUpdate(@PathParam("id") String id) {
            // some code to return JSON
            ...
            return myJson;
        }

    But I also want to return an HTTP status code (500, 200, 204, etc.) along with the JSON. I tried using the HttpServletResponse object:

        response.sendError("error message", 500);

    But this made the browser think it was a real 500, so the output web page was the regular HTTP 500 error page. What I want is just to return the status code so that my JavaScript on the client side can handle some logic depending on which HTTP status code is returned (maybe just to display the error code and message on the HTML page). Is it possible to do so? Or should HTTP status codes not be used for such a thing?
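    A minimal sketch of one common approach, assuming a hypothetical Machine entity and lookup: return a javax.ws.rs.core.Response, which carries both the status code and the JSON entity, instead of returning the entity type directly.

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.Response;

        @Path("get")
        public class MachineResource {

            @GET
            @Path("{id}")
            @Produces("application/json")
            public Response getMachineToUpdate(@PathParam("id") String id) {
                Machine machine = findMachine(id);          // hypothetical lookup
                if (machine == null) {
                    // 404 with an empty body; pick whatever code fits the case
                    return Response.status(Response.Status.NOT_FOUND).build();
                }
                // 200 with the entity serialized as JSON in the body
                return Response.ok(machine).build();
            }

            private Machine findMachine(String id) {
                return null;   // stand-in for the real TopLink query
            }

            public static class Machine { public String id; }
        }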

    Read the article

  • JDBC call not executing

    - by dbyrne
    I am working on one of the DAOs for a medium-sized web application. Unfortunately, it contains very convoluted logic and makes hundreds of JDBC stored proc calls in loops. This is out of my control. I am working on a method inside the DAO which makes a single JDBC call. The simplified version of what this method looks like is this:

        DriverManager.registerDriver(new com.sybase.jdbc2.jdbc.SybDriver());
        Connection con = DriverManager.getConnection((String)connectionDetails.get("DATABASE_URL"),
                                                     (String)connectionDetails.get("USERID"),
                                                     (String)connectionDetails.get("PASSWORD"));
        String sqlToExecute = "{call " + STORED_PROC + "(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}";
        CallableStatement stmt = con.prepareCall(sqlToExecute);
        // Maybe I should try calling clearParameters here?
        stmt.setString(1, someData);
        // ....Set of parameters....
        if (!stmt.execute()) {
            // execute method never returns false
        }
        stmt.close();

    It's pretty much a textbook JDBC call. All this stored proc does is insert a single row. Here is where things get crazy: this code works when you run it through a debugger line by line, but fails when you run it "full speed". Not only does it fail, but it doesn't throw any exception! The execute method always returns true. It just breezes right through the JDBC call without inserting a row into the database. If you go through the log files, copy the stored proc call and run it manually, it works (just like it does in debug mode). What's strange is that the rest of the DAO, with all its hundreds of looped stored proc calls, works fine.

    My thinking is that Connection or CallableStatement is caching some value behind the scenes that is screwing things up. Has anyone ever seen anything like this before? A JDBC call failing with no exceptions? I know it will be impossible to provide a complete solution to this without seeing the whole application; I am just looking for suggestions on possible issues to investigate.
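    One hedged place to start, sketched below with hypothetical proc and parameter names: drain every result the proc produces via getResultSet()/getUpdateCount()/getMoreResults() (Sybase procs often return extra counts or result sets that hide problems until they are consumed) and commit explicitly, so an insert that silently never happened becomes visible.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class StoredProcCheck {
            public static void callProc(String url, String user, String password) throws SQLException {
                try (Connection con = DriverManager.getConnection(url, user, password)) {
                    con.setAutoCommit(false);
                    try (CallableStatement stmt = con.prepareCall("{call MY_PROC(?)}")) {
                        stmt.setString(1, "someData");
                        boolean hasResultSet = stmt.execute();
                        while (true) {
                            if (hasResultSet) {
                                try (ResultSet rs = stmt.getResultSet()) {
                                    while (rs.next()) { /* consume so server-side errors surface */ }
                                }
                            } else {
                                int count = stmt.getUpdateCount();
                                if (count == -1) {
                                    break;   // no more results of any kind
                                }
                            }
                            hasResultSet = stmt.getMoreResults();
                        }
                    }
                    con.commit();   // the inserted row is not durable until this succeeds
                }
            }
        }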

    Read the article

  • Is there a max recommended size on bundling js/css files due to chunking or packet loss?

    - by George Mauer
    So we've all heard that it's good to bundle your JavaScript. Of course it is, but it seems to me that the story is too simple. See if my logic makes sense here.

    Obviously fewer HTTP requests means fewer round trips and hence better. However -- and I don't know much about bare HTTP -- aren't HTTP responses sent in chunks? And if a file is larger than one of those chunks, doesn't it have to be downloaded as multiple (possibly synchronous?) round trips? As opposed to this, several requests for files just under the chunking size would arrive much quicker, since modern web browsers download resources like scripts in parallel.

    Even if chunking is not an issue, it seems like there would be some maximum recommended size just due to the likelihood of packet loss alone, since a bundled file must wait until it is entirely downloaded to execute, versus the more lenient native rule that scripts must only execute in order.

    Obviously there are also matters of browser caching and code volatility to consider, but can someone confirm this or explain why I'm off base? Does anyone have any numbers to put to it?

    Read the article

  • template files in python

    - by saminny
    I am trying to use Python to translate a set of templates into a set of configuration files, based on values taken from a main configuration file. However, I am having certain issues. Consider the following example of a template file, file1.cfg.template:

        %(CLIENT1)s %(HOST1)s %(PORT1)d C %(COMPID1)s
        %(CLIENT2)s %(HOST2)s %(PORT2)d C %(COMPID2)s

    This file contains an entry for each client. There are hundreds of config files like this and I don't want to have logic for each type of config file. Python should do the replacements and generate the config files automatically, given a set of global values read from a main XML config file.

    However, in the above example, if CLIENT2 does not exist, how do I delete that line? I expect Python would generate the config file using the following simple code:

        open("file1.cfg.template").read() % myhash

    where myhash is a hash of all configuration parameters, which may not contain CLIENT2 at all. In the case where it does not contain CLIENT2, I want that line to disappear from the file. Is it possible to insert some 'IF' block in the file and have Python evaluate it?

    Thanks for your help. Any suggestions most welcome.
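    One hedged alternative to embedding IF blocks in the templates, sketched with made-up client values: render the template line by line and drop any line whose placeholders are missing from the configuration hash.

        # Minimal sketch; file names and keys are hypothetical.
        import re

        PLACEHOLDER = re.compile(r"%\((\w+)\)[sd]")

        def render(template_path, values):
            lines = []
            with open(template_path) as f:
                for line in f:
                    keys = PLACEHOLDER.findall(line)
                    if all(k in values for k in keys):
                        lines.append(line % values)
                    # otherwise skip the line, e.g. when CLIENT2 is not configured
            return "".join(lines)

        config_text = render("file1.cfg.template", {
            "CLIENT1": "acme", "HOST1": "host1", "PORT1": 9001, "COMPID1": "A1",
        })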

    Read the article

  • Recursion in prepared statements

    - by Rob
    I've been using PDO and preparing all my statements, primarily for security reasons. However, I have a part of my code that executes the same statement many times with different parameters, and I thought this would be where prepared statements really shine. But they actually break the code... The basic logic of the code is this:

        function someFunction($something) {
            global $pdo;
            $array = array();
            static $handle = null;
            if (!$handle) {
                $handle = $pdo->prepare("A STATEMENT WITH :a_param");
            }
            $handle->bindValue(":a_param", $something);
            if ($handle->execute()) {
                while ($row = $handle->fetch()) {
                    $array[] = someFunction($row['blah']);
                }
            }
            return $array;
        }

    It looked fine to me, but it was missing out a lot of rows. Eventually I realised that the statement handle was being changed (executed with a different param), which means the call to fetch in the while loop will only ever work once; then the function calls itself again and the result set is changed.

    So I am wondering what the best way is of using PDO prepared statements recursively. One way could be to use fetchAll(), but the manual says that has a substantial overhead, and the whole point of this is to make it more efficient. The other thing I could do is not reuse a static handle and instead make a new one every time. I believe that since the query string is the same, internally the MySQL driver will be using a prepared statement anyway, so there is just the small overhead of creating a new handle on each recursive call. Personally I think that defeats the point. Or is there some way of rewriting this?
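    For comparison, a minimal sketch of the fetchAll() variant the question mentions (statement text and column name kept from the question): materialising the rows first frees the shared handle before the recursive call, at the cost of holding one level's rows in memory.

        function someFunction($something) {
            global $pdo;
            static $handle = null;
            if ($handle === null) {
                $handle = $pdo->prepare("A STATEMENT WITH :a_param");
            }
            $handle->bindValue(":a_param", $something);
            $array = array();
            if ($handle->execute()) {
                // Pull the whole result set now, so the recursion below can reuse $handle safely.
                $rows = $handle->fetchAll(PDO::FETCH_ASSOC);
                $handle->closeCursor();
                foreach ($rows as $row) {
                    $array[] = someFunction($row['blah']);
                }
            }
            return $array;
        }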

    Read the article

  • Adding <tr> from repeater's ItemDataBound Event

    - by nemiss
    My repeater's templates generate a table, where each item is a table row. When a very specific condition is met (item data), I want to add an additional row to the table from this event. How can I do that?

        protected void rptData_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
            {
                bool tmp = bool.Parse(DataBinder.Eval(e.Item.DataItem, "somedata").ToString());
                if (!tmp && e.Item.ItemIndex != 0)
                {
                    // Add row after this item
                }
            }
        }

    I can use e.Item.Controls.Add() and add a TableRow, but for that I need to locate a table, right? How can I solve that?

    UPDATE: I will explain why I need this. I am creating a sort of message board, where data entries are displayed in a table style. The first items in the table are "important" items; after those items, I want to add this row. I could solve it using two repeaters, where the first repeater is bound to pinned items and the second repeater is bound to regular items. But I don't want to have two repeaters, nor do I want to complicate the business logic by separating the fetched data into pinned and not-pinned collections. I think the best option is to do it "on the fly", using one repeater and one datasource.
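    One hedged possibility (the divider markup and colspan below are made up): since the repeater only emits markup, the extra row can be appended to the current item as a LiteralControl, which renders right after that item's row without ever locating the surrounding table control.

        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class BoardPage : System.Web.UI.Page
        {
            protected void rptData_ItemDataBound(object sender, RepeaterItemEventArgs e)
            {
                if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
                {
                    bool pinned = bool.Parse(DataBinder.Eval(e.Item.DataItem, "somedata").ToString());
                    if (!pinned && e.Item.ItemIndex != 0)
                    {
                        // Raw markup for the extra row; adjust colspan to the real column count.
                        e.Item.Controls.Add(new LiteralControl(
                            "<tr class=\"divider\"><td colspan=\"3\">&nbsp;</td></tr>"));
                    }
                }
            }
        }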

    Read the article

  • structDelete doesn't affect the shallow copy?

    - by Travis
    I was playing around with onError, so I tried to create an error using a large XML document object.

        <cfset XMLByRef = variables.parsedXML.XMLRootElement.XMLChildElement>
        <cfset structDelete(variables.parsedXML, "XMLRootElement")>

        <cfset startXMLShortLoop = getTickCount()>
        <cfloop from = "1" to = "#arrayLen(variables.XMLByRef)#" index = "variables.i">
            <cfoutput>#variables.XMLByRef[variables.i].id.xmltext#</cfoutput><br />
        </cfloop>
        <cfset stopXMLShortLoop = getTickCount()>

    I expected to get an error because I deleted the structure I was referencing. From the LiveDocs: "Variable Assignment - Creates an additional reference, or alias, to the structure. Any change to the data using one variable name changes the structure that you access using the other variable name. This technique is useful when you want to add a local variable to another scope or otherwise change a variable's scope without deleting the variable from the original scope." Instead I got:

        580df1de-3362-ca9b-b287-47795b6cdc17
        25a00498-0f68-6f04-a981-56853c0844ed
        ...
        ...
        ...
        db49ed8a-0ba6-8644-124a-6d6ebda3aa52
        57e57e28-e044-6119-afe2-aebffb549342
        Looped 12805 times in 297 milliseconds

    A <cfdump var = "#variables#"> shows there's nothing in the structure, just parsedXML.xmlRoot.xmlName with the value of XMLRootElement. I also tried <cfset structDelete(variables.parsedXML.XMLRootElement, "XMLChildElement")> as well as structClear for both.

    More information on deleting from the XML document object: http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec22c24-78e3.html

    Can someone please explain my faulty logic? Thanks.

    Read the article

  • vectorizing loops in Matlab - performance issues

    - by Gacek
    This question is related to these two:
    http://stackoverflow.com/questions/2867901/introduction-to-vectorizing-in-matlab-any-good-tutorials
    http://stackoverflow.com/questions/2561617/filter-that-uses-elements-from-two-arrays-at-the-same-time

    Based on the tutorials I read, I was trying to vectorize a procedure that takes a really long time. I've rewritten this:

        function B = bfltGray(A,w,sigma_r)
        dim = size(A);
        B = zeros(dim);
        for i = 1:dim(1)
            for j = 1:dim(2)
                % Extract local region.
                iMin = max(i-w,1);
                iMax = min(i+w,dim(1));
                jMin = max(j-w,1);
                jMax = min(j+w,dim(2));
                I = A(iMin:iMax,jMin:jMax);
                % Compute Gaussian intensity weights.
                F = exp(-0.5*(abs(I-A(i,j))/sigma_r).^2);
                B(i,j) = sum(F(:).*I(:))/sum(F(:));
            end
        end

    into this:

        function B = rngVect(A, w, sigma)
        W = 2*w+1;
        I = padarray(A, [w,w],'symmetric');
        I = im2col(I, [W,W]);
        H = exp(-0.5*(abs(I-repmat(A(:)', size(I,1),1))/sigma).^2);
        B = reshape(sum(H.*I,1)./sum(H,1), size(A, 1), []);

    But this version seems to be as slow as the first one, and in addition it uses a lot of memory and sometimes causes memory problems. I suppose I've made something wrong -- probably some logic mistake regarding vectorizing. Well, in fact I'm not surprised: this method creates really big matrices and probably the computations are proportionally longer.

    I have also tried to write it using nlfilter (similar to the second solution given by Jonas), but that seems to be hard since I use Matlab 6.5 (R13) (there are no sophisticated function handles available).

    So once again, I'm asking not for a ready solution, but for some ideas that would help me solve this in reasonable time. Maybe you can point out what I did wrong.

    Read the article

  • Which languages and techniques can I use to improve my coding practices?

    - by Danjah
    I've been offered the opportunity to upskill through study, while at work, which is great.

    My background: I am mostly self-taught, but have worked with many excellent people over the years -- both self-taught and fully educated -- and on many decent projects. I have mild experience in ActionScript, I'm getting better every day with my JavaScript, and my CSS is angled at best practice but needs a bit of modernising. I'm a traditional interface developer, I'm not stupid, and I like a challenge.

    My goal: I need to start seeing ways of applying better logic, optimising code, refactoring, different styles of development (agile, others?), and... well, I need to try and start thinking like a more solid programmer. It's hard to describe; I have good solutions and I'm efficient, but I KNOW there's a bunch I am missing. I am already employed with a solid career, but I feel the need to fill gaps.

    My question/s: Are there a set of guiding principles you can recommend I focus on to improve the points above? Are there particular programming languages which I might focus on to get a broader overview? Do you think I should avoid particular styles of development, or even languages, while solidifying what might end up being partly 'the basics' but hopefully 'advanced programming'?

    Sorry if this appears off topic or something, but I figure you're probably some of the best people to ask.

    Read the article

  • Android Maven and Refresh Problem

    - by antonio Musella
    Hi all, I have a strange problem with Maven and Android. I have 3 Android Maven projects and 2 normal Java Maven projects, divided in this manner:

    Normal projects:
    - Model project ... packaged as jar ... contains Java POJO beans and interfaces.
    - DAO project ... packaged as jar ... contains DB logic - depends on the model project.

    Android application Maven projects:
    - ContentProvider ... packaged as apk ... contains ContentProviders only. Depends on the DAO project.
    - Editors ... packaged as apk ... contains only editors. Depends on the DAO project.
    - MainApp ... packaged as apk ... contains MyApp. Depends on the DAO ...

    The problem is that if I modify the DAO project, then do a Maven clean and Maven install of all the apk projects, then run it as an Android Application within Eclipse, I don't see the updated app on my emulator. Strangely, if I shut down my Ubuntu workstation and restart it, I can see the updated app on my emulator.

    Do you know a solution for this issue? Thanks and regards.

    Read the article

  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project extending an existing huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it) or transform one complicated data structure into another (complicated in a different way).

    What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff) and necessary data transformations are deployed via a channel other than version control. There is no way to change this within our project's scope.

    Maintaining a testing data set seems to be impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there).

    I'd be glad for any suggestion or reference to help us. Some team members tend to be tired of figuring out how even to start, as our experience with unit-testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).
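    As one possible starting shape (the table, package and expected value below are entirely hypothetical), a self-contained PL/SQL block can act as a poor man's unit test: it creates its own fixture rows, calls the packaged function, compares against an expected literal, and rolls back so the shared, ever-changing schema is left untouched. Frameworks such as utPLSQL wrap essentially this pattern in runners and assertions.

        DECLARE
          v_actual NUMBER;
        BEGIN
          SAVEPOINT before_test;

          -- fixture rows for this test only (hypothetical table)
          INSERT INTO orders (id, customer_id, amount) VALUES (9001, 1, 50);
          INSERT INTO orders (id, customer_id, amount) VALUES (9002, 1, 70);

          -- call the unit under test (hypothetical package function)
          v_actual := billing_pkg.total_for_customer(p_customer_id => 1);

          IF v_actual = 120 THEN
            DBMS_OUTPUT.PUT_LINE('PASS total_for_customer');
          ELSE
            DBMS_OUTPUT.PUT_LINE('FAIL total_for_customer: got ' || v_actual);
          END IF;

          ROLLBACK TO before_test;
        END;
        /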

    Read the article

  • Parsing custom time format with SimpleDateFormat

    - by ggrigery
    I'm having trouble parsing a date format that I'm getting back from an API and that I have never seen before (I believe it is a custom format). An example of a date: /Date(1353447000000+0000)/

    When I first encountered this format it didn't take me long to see that it was the time in milliseconds with a time zone offset, but I'm having trouble extracting this date using SimpleDateFormat. Here was my first attempt:

        String weirdDate = "/Date(1353447000000+0000)/";
        SimpleDateFormat sdf = new SimpleDateFormat("'/Date('SSSSSSSSSSSSSZ')/'");
        Date d1 = sdf.parse(weirdDate);
        System.out.println(d1.toString());
        System.out.println(d1.getTime());
        System.out.println();

        Date d2 = new Date(Long.parseLong("1353447000000"));
        System.out.println(d2.toString());
        System.out.println(d2.getTime());

    And the output:

        Tue Jan 06 22:51:41 EST 1970
        532301760

        Tue Nov 20 16:30:00 EST 2012
        1353447000000

    The date (and number of milliseconds parsed) is not even close, and I haven't been able to figure out why. After some troubleshooting, I discovered that the way I'm trying to use SimpleDateFormat is clearly flawed. Example:

        String weirdDate = "1353447000000";
        SimpleDateFormat sdf = new SimpleDateFormat("S");
        Date d1 = sdf.parse(weirdDate);
        System.out.println(d1.toString());
        System.out.println(d1.getTime());

    And the output:

        Wed Jan 07 03:51:41 EST 1970
        550301760

    I can't say I've ever tried to use SimpleDateFormat this way, to parse a time in milliseconds, because I would normally use Long.parseLong() and pass it straight into new Date(long) (and in fact the solution I have in place right now is just a regular expression and parsing a long). I'm looking for a cleaner solution where I can easily extract this time in milliseconds with the timezone and quickly parse it into a date without the messy manual handling. Does anyone have any ideas, or can anyone spot the errors in my logic above? Help is much appreciated.
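    For comparison, a minimal sketch of the regex route the question already hints at: SimpleDateFormat has no pattern letter meaning "milliseconds since the epoch" (S is the millisecond-of-second field, which lenient parsing rolls over into the other fields, hence the strange 1970 dates above), so the number and offset are usually extracted directly.

        import java.util.Date;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class WcfDateParser {
            private static final Pattern WCF_DATE = Pattern.compile("/Date\\((-?\\d+)([+-]\\d{4})?\\)/");

            public static Date parse(String weirdDate) {
                Matcher m = WCF_DATE.matcher(weirdDate);
                if (!m.matches()) {
                    throw new IllegalArgumentException("Not a /Date(...)/ value: " + weirdDate);
                }
                long millis = Long.parseLong(m.group(1));   // already UTC; the offset only matters for display
                return new Date(millis);
            }

            public static void main(String[] args) {
                System.out.println(parse("/Date(1353447000000+0000)/"));
            }
        }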

    Read the article

  • Closures in Ruby

    - by Isaac Cambron
    I'm kind of new to Ruby and some of the closure logic has me confused. Consider this code:

        array = []
        for i in (1..5)
          array << lambda {i}
        end
        array.map{|f| f.call}
        => [5, 5, 5, 5, 5]

    This makes sense to me because i is bound outside the loop, so the same variable is captured by each trip through the loop. It also makes sense to me that using an each block can fix this:

        array = []
        (1..5).each{|i| array << lambda {i}}
        array.map{|f| f.call}
        => [1, 2, 3, 4, 5]

    ...because i is now being declared separately for each pass. But now I get lost: why can't I also fix it by introducing an intermediate variable?

        array = []
        for i in 1..5
          j = i
          array << lambda {j}
        end
        array.map{|f| f.call}
        => [5, 5, 5, 5, 5]

    Because j is new each time through the loop, I'd think a different variable would be captured on each pass. For example, this is definitely how C# works, and how -- I think -- Lisp behaves with a let. But in Ruby, not so much. It almost looks like = is aliasing the variable instead of copying the reference, but that's just speculation on my part. What's really happening?
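    A hedged way to see what is going on: for does not introduce a new scope, so both i and the intermediate j live in the enclosing scope and every lambda closes over the same variables, whereas a block creates a fresh scope on every call. The sketch below (wrapping each variant in its own method so the two j's don't interfere) makes that visible with defined?.

        def with_for
          array = []
          for i in 1..5
            j = i                      # for does not create a scope: one shared j
            array << lambda { j }
          end
          [defined?(j), array.map(&:call)]
        end

        def with_each
          array = []
          (1..5).each do |i|
            j = i                      # the block creates a new j on every call
            array << lambda { j }
          end
          [defined?(j), array.map(&:call)]
        end

        p with_for    # => ["local-variable", [5, 5, 5, 5, 5]]  -- j leaked out of the loop body
        p with_each   # => [nil, [1, 2, 3, 4, 5]]               -- j never existed outside the block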

    Read the article

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8s to 1s. I also use Fluent NHibernate, so the path is more like: ClassMap class definitions in code -> fluent configuration -> XML -> NHibernate Configuration -> Configuration serialized to disk.

    The problem with doing this is that one runs the risk of out-of-date mappings: if I change the mappings but forget to rebuild the serialized configuration, then I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix.

    Does anybody have any idea how I would be able to detect whether my ClassMaps have changed, so that I could either issue an immediate warning/error or rebuild the serialized configuration on demand?

    At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This will pick up mapping changes, but unfortunately it generates a massive false-positive rate, as ANY change to the code results in an out-of-date flag. I can't move the ClassMaps to another assembly, as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?

    Read the article

  • Intermittent Issue Writing to Google Appengine Datastore

    - by user242153
    Hi, I have a functioning app and recently have had intermittent problems writing to the datastore. I did not make any relevant code changes; however, in the last few days my attempts to write to the datastore sometimes work and sometimes don't.

    I am trying to save an object that is in a many-to-one relationship with an existing persisted parent. The logic works like this:

    1) The parent is pulled from the datastore
    2) The child is created / instantiated using its constructor
    3) Parent.addSingleChild(child); // the "addSingleChild" method just adds the object argument to the collection of children
    4) child.setParent(Parent); // sets the parent object on the parent field

    I am using transactions as explained in the documentation, ending with "finally { if (tx.isActive()) { tx.rollback(); } }".

    When the servlet is called, the parent is fetched from the datastore and the child object is created and added to the many-to-one mapping to the pre-existing parent. The child should automatically be persisted, since the parent is already persistent and the child is added to the collection of children that map to the parent. And it worked this way in the past. However, to be sure, I did add a pm.makePersistent(child). That doesn't seem to help; I still have the intermittent problem.

    Any suggestions would be appreciated, and if you need to see the actual code I can post it. Thanks
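    For reference, a minimal sketch of the write path as described (Parent and Child stand in for the real owned JDO entities, and the method names follow the question): doing the fetch, the relationship update and the makePersistent inside one transaction, with the rollback in finally, at least makes a failed commit surface as an exception rather than a silently missing row.

        import javax.jdo.PersistenceManager;
        import javax.jdo.Transaction;

        public class ChildWriter {
            // Parent and Child are stand-ins for the question's owned JDO entities.
            public static void addChild(PersistenceManager pm, Object parentKey, Child child) {
                Transaction tx = pm.currentTransaction();
                try {
                    tx.begin();
                    Parent parent = pm.getObjectById(Parent.class, parentKey);
                    parent.addSingleChild(child);     // add to the owned collection
                    child.setParent(parent);
                    pm.makePersistent(parent);        // cascades to the new child
                    tx.commit();                      // a datastore contention failure throws here
                } finally {
                    if (tx.isActive()) {
                        tx.rollback();                // commit never ran; the caller sees the exception
                    }
                }
            }
        }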

    Read the article

  • Maven: Repository no longer exists! Now what?

    - by pek
    I have just begun my journey into Maven2 and find the dependency repository logic a little weird... As far as I understand, I need to point Maven to a repository from which it can fetch the various POMs listed in my dependencies. In other words, instead of downloading all the dependencies into my lib folder, as I did in the Ant era, I now have to look into various Maven repositories and, hopefully, find what I need. OK, thanks to MVNBrowser things get a little easier. But! What if the Maven repository no longer exists?

    For example, I use Slick in my project. Among its other dependencies, Slick uses JNLP (for some reason). The artifact for jnlp is:

        <dependency>
          <groupId>javax.jnlp</groupId>
          <artifactId>jnlp</artifactId>
          <version>1.2</version>
        </dependency>

    According to MVNBrowser, javax.jnlp can only be found in one repository, Freehep. Which is no longer available. So now what?
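    Two common workarounds, sketched below (the Slick coordinates shown are assumptions; check the real pom): either exclude the unreachable transitive artifact from the dependency that drags it in, or install a locally obtained jar into the local repository with mvn install:install-file -Dfile=jnlp.jar -DgroupId=javax.jnlp -DartifactId=jnlp -Dversion=1.2 -Dpackaging=jar.

        <dependency>
          <groupId>slick</groupId>
          <artifactId>slick</artifactId>
          <version>274</version>
          <exclusions>
            <exclusion>
              <!-- do not try to resolve the artifact whose repository is gone -->
              <groupId>javax.jnlp</groupId>
              <artifactId>jnlp</artifactId>
            </exclusion>
          </exclusions>
        </dependency>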

    Read the article

  • Inorder tree traversal in binary tree in C

    - by srk
    In the code below, I am creating a binary tree using the insert function and trying to display the inserted elements using the inorder function, which follows the logic of in-order traversal. When I run it, numbers appear to be inserted, but when I try the inorder function (input 3), the program continues to the next input without displaying anything. I guess there might be a logical error. Please help me clear it up. Thanks in advance...

        #include <stdio.h>
        #include <stdlib.h>

        int i;

        typedef struct ll {
            int data;
            struct ll *left;
            struct ll *right;
        } node;

        node *root1 = NULL;   // the root node

        void insert(node *root, int n)
        {
            if (root == NULL)   // for the first (root) node
            {
                root = (node *)malloc(sizeof(node));
                root->data = n;
                root->right = NULL;
                root->left = NULL;
            }
            else
            {
                if (n < (root->data)) {
                    root->left = (node *)malloc(sizeof(node));
                    insert(root->left, n);
                }
                else if (n > (root->data)) {
                    root->right = (node *)malloc(sizeof(node));
                    insert(root->right, n);
                }
                else {
                    root->data = n;
                }
            }
        }

        void inorder(node *root)
        {
            if (root != NULL) {
                inorder(root->left);
                printf("%d ", root->data);
                inorder(root->right);
            }
        }

        main()
        {
            int n, choice = 1;
            while (choice != 0) {
                printf("Enter choice--- 1 for insert, 3 for inorder and 0 for exit\n");
                scanf("%d", &choice);
                switch (choice) {
                case 1:
                    printf("Enter number to be inserted\n");
                    scanf("%d", &n);
                    insert(root1, n);
                    break;
                case 3:
                    inorder(root1);
                    break;
                default:
                    break;
                }
            }
        }
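    A hedged sketch of one way to repair insert: the function receives root by value, so the node allocated for an empty tree (and the prematurely malloc'd, uninitialised children) is never attached to root1, which is why inorder prints nothing. Passing the link itself fixes both problems.

        /* Pass a pointer to the link (node **) so the new node is attached to the
           tree the caller sees; allocate only when the correct empty slot is reached. */
        void insert_fixed(node **link, int n)
        {
            if (*link == NULL) {
                node *fresh = malloc(sizeof *fresh);
                fresh->data = n;
                fresh->left = NULL;
                fresh->right = NULL;
                *link = fresh;
            } else if (n < (*link)->data) {
                insert_fixed(&(*link)->left, n);
            } else if (n > (*link)->data) {
                insert_fixed(&(*link)->right, n);
            } else {
                (*link)->data = n;   /* duplicate value: nothing more to do */
            }
        }

        /* called from main as: insert_fixed(&root1, n); */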

    Read the article

  • Is there a better way to write this LINQ query?

    - by Raj Aththanayake
    Hi, is there a better, simplified way to write this query? My logic is: if the collection contains matches on customer ID and country code, do the query ordered by customer ID ascending. If no ID matches are contained in CustIDs, then order by customer name. Is there a better way to write this query? I'm not really familiar with complex lambdas.

        var custIdResult = (from Customer c in CustomerCollection
                            where (c.CustomerID.ToLower().Contains(param.ToLower())
                                   && (countryCodeFilters.Any(item => item.Equals(c.CountryCode))))
                            select c).ToList();

        if (custIdResult.Count > 0)
        {
            return from Customer c in custIdResult
                   where (c.CustomerName.ToLower().Contains(param.ToLower())
                          && countryCodeFilters.Any(item => item.Equals(c.CountryCode)))
                   orderby c.CustomerID ascending
                   select c;
        }
        else
        {
            return from Customer c in CustomerCollection
                   where (c.CustomerName.ToLower().Contains(param.ToLower())
                          && countryCodeFilters.Any(item => item.Equals(c.CountryCode)))
                   orderby c.CustomerName descending
                   select c;
        }
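    A hedged rewrite in method syntax (the method name and IEnumerable<Customer> signature are assumptions), keeping the original behaviour: compute the shared country filter and lower-cased parameter once, and branch only on which set and ordering to return.

        using System.Collections.Generic;
        using System.Linq;

        public IEnumerable<Customer> FindCustomers(string param,
                                                   IEnumerable<string> countryCodeFilters,
                                                   IEnumerable<Customer> customerCollection)
        {
            string p = param.ToLower();

            var inCountry = customerCollection
                .Where(c => countryCodeFilters.Contains(c.CountryCode))
                .ToList();

            var idMatches = inCountry
                .Where(c => c.CustomerID.ToLower().Contains(p))
                .ToList();

            if (idMatches.Count > 0)
            {
                return idMatches
                    .Where(c => c.CustomerName.ToLower().Contains(p))
                    .OrderBy(c => c.CustomerID);
            }

            return inCountry
                .Where(c => c.CustomerName.ToLower().Contains(p))
                .OrderByDescending(c => c.CustomerName);
        }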

    Read the article

  • When are predicates appropriate and what is the best pattern for usage

    - by Maxim Gershkovich
    When are predicates appropriate, and what is the best pattern for usage? What are the advantages of predicates? It seems to me that in most cases where a predicate can be employed, a tight loop would accomplish the same functionality. I don't see a reusability argument, given that you will probably only implement a predicate in one method, right? They look and feel nice, but besides that it seems like you would only employ them when you need a quick hack on the collection classes.

    UPDATE: But why would you be rewriting the tight loop again and again? In my mind/code, when it comes to collections I always end up with something like:

        Class Person
        End Class

        Class PersonList
            Inherits List(Of Person)

            Function FindByName(Name) As Person
                ' tight loop....
            End Function
        End Class

    @Ani By that same logic I could implement the methods as such:

        Class PersonList
            Inherits List(Of Person)

            Function FindByName(Name) As PersonList
            End Function

            Function FindByAge(Age) As PersonList
            End Function

            Function FindBySocialSecurityNumber(SocialSecurityNumber) As PersonList
            End Function
        End Class

    And call it as such:

        Dim res As PersonList = MyList.FindByName("Max").FindByAge(25).FindBySocialSecurityNumber(1234)

    and the result, along with the amount of code and its reusability, is largely the same, no? I am not arguing, just trying to understand.
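    A hedged sketch of the predicate-based counterpart (assuming Person exposes Name, Age and SocialSecurityNumber properties, which the question leaves empty): one generic filter method takes a Predicate(Of Person), so the tight loop is written exactly once and each "FindBy" becomes a lambda at the call site.

        Imports System.Collections.Generic

        Class Person
            Public Property Name As String
            Public Property Age As Integer
            Public Property SocialSecurityNumber As Integer
        End Class

        Class PersonList
            Inherits List(Of Person)

            Function FindAllWhere(match As Predicate(Of Person)) As PersonList
                Dim result As New PersonList()
                result.AddRange(Me.FindAll(match))   ' the single shared "tight loop"
                Return result
            End Function
        End Class

        ' Usage: the same chaining as before, without a hand-written loop per property.
        Dim res As PersonList = MyList.FindAllWhere(Function(p) p.Name = "Max") _
                                      .FindAllWhere(Function(p) p.Age = 25) _
                                      .FindAllWhere(Function(p) p.SocialSecurityNumber = 1234)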

    Read the article

  • Bulletin board - Database optimisation

    - by andrew
    This question is a follow-on from this question.

    The project and problem: The project I am currently working on is a bulletin board for a large non-profit organisation. The bulletin board will be used to allow inter-office communication within the organisation. I am building the application and have been having trouble extracting the results that I need from my database, because I don't think it is properly normalized and because of limitations in my knowledge of relational database theory and MySQL. I would appreciate input into the design of the board in general and, in particular, ways that the database structure can be improved to facilitate efficient queries and help me develop this application and future applications faster.

    Business logic: The bulletin board will be used in the following way.

    Posting bulletins and responses to bulletins: Employees or 'users' in offices around the country will be able to post messages to the bulletin board. Bulletins must be posted to a location and categorised -- I'll call these "bulletins". Users will be able to post any number of replies to any one bulletin, and users will be able to reply to their own bulletin -- I'll call these 'replies'.

    Rating bulletins and replies: Users will be able to either 'like' or 'dislike' a bulletin or a reply, and the total number of likes or dislikes will be shown for each bulletin or reply.

    Viewing the bulletin board and responses: Bulletins can be displayed chronologically. Users can sort bulletins chronologically or chronologically by the latest reply to that bulletin (let me know if you need more explanation). When a particular bulletin is selected, replies to that bulletin will be displayed chronologically.

    @PerformanceDBA - edited 10:34 EST 28/12/10: I have begun implementing the data model. I assume that the 6th data model is the physical model because it contains the associative tables. I am going to post any questions that I have below. I will put up a database dump once I am done. I will then put up a list of all the queries that I need to run on the database and begin writing them. I hope you had a good Christmas. I'm in Canada and there's snow!

    Implementation of the physical model

    Read the article

  • Is there a default way to get hold of an internal property in jQueryUi widget?

    - by prodigitalson
    I'm using an existing widget from the jQuery UI labs called selectmenu. It has callback options for the close and open events. The problem is that in these events I need to animate a node that is part of the widget, but not the element the widget is connected to. In order to do this I need access to that node. For example, if I were to actually modify the widget code itself:

        // ... other methods/properties
        "open": function(event) {
            // ... the original logic

            // this is my animation
            $(this.list).slideUp('slow');

            // this is the original call to _trigger
            this._trigger('open', event, this._uiHash());
        },
        // ... other methods/properties

    However, when in the scope of the event handler, this is the original element I called the widget on. I need the widget instance, or specifically the widget instance's list property.

        $('select#custom').selectmenu({
            'open': function() {
                // in this scope `this` is an HTMLSelectElement, not the UI widget
            }
        });

    What's the best way to go about getting the list property from the widget?
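    A hedged sketch of one way in, assuming the labs selectmenu follows the standard jQuery UI widget factory convention of that era and stores its instance in the element's data under the widget name:

        $('select#custom').selectmenu({
            open: function (event) {
                // `this` is the original <select>; the factory keeps the instance in .data()
                var instance = $(this).data('selectmenu');
                if (instance && instance.list) {
                    $(instance.list).slideUp('slow');
                }
            }
        });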

    Read the article

  • Linked List push()

    - by JKid314159
    The stack is initialized with int MaxSize = 3. Then I push one int onto the list. "Pushed:" is written to the console, and the program crashes there. I think my logic is flawed, but I'm unsure -- maybe an infinite loop or an unmet condition? Thanks for your help. I'm trying to traverse the list to the last node in the second part of the full() method. I previously implemented this stack as array-based, so I must implement this full() method, as it is called from the main class.

        while (!stacker.full()) {
            cout << "Enter number = ";
            cin >> intIn;
            stacker.push(intIn);
            cout << "Pushed: " << intIn << endl;
        } // while

    The call goes to LinkListStack.cpp, to class LinkList's full():

        int LinkList::full()
        {
            if (head == NULL)
            {
                top = 0;
            }
            else
            {
                LinkNode *tmp1;
                LinkNode *tmp2;
                tmp1 = head;
                while (top != MaxSize)
                {
                    if (tmp1->next != NULL) {
                        tmp2 = tmp1->next;
                        tmp1 = tmp2;
                        ++top;
                    } // if
                } // while
            } // else
            return (top + 1 == MaxSize);
        }
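    A hedged guess at the hang, with a sketch of a count-based full(): once the single pushed node's next is NULL, the inner if never runs, top never reaches MaxSize, and the while loop spins forever. Counting nodes from head on every call avoids relying on the accumulated top member entirely.

        // Sketch against the classes described in the question (LinkList with a
        // LinkNode* head member and an int MaxSize).
        int LinkList::full()
        {
            int count = 0;
            for (LinkNode *cur = head; cur != NULL; cur = cur->next) {
                ++count;                 // walk to the end, counting nodes
            }
            return count >= MaxSize;     // full once MaxSize nodes are stored
        }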

    Read the article
