Search Results

Search found 10751 results on 431 pages for 'fast forward'.


  • Implementing Object Oriented: ansi-C approach

    - by No Money
    Hey there, I am an intermediate programmer in Java and know some of the basics in C++. I recently started to skim over "C language" [please note that I emphasized C and want to stick with C, as I found it to be a perfect tool, so no need for suggestions on why I should move back to C++ or Java or any other language (e.g. C#)]. Moving on, I coded an Object Oriented approach in C but kinda stumbled over the pointers part. Please understand that I am just a noob trying to extend my knowledge beyond what I learned in high school. Here is my code..... #include <stdio.h> typedef struct per{ int privateint; char *privateString; struct per (*New) (); void (*deleteperOBJ) (struct t_person *); void (*setperNumber) ((struct*) t_person,int); void (*setperString) ((struct*) t_person,char *); void (*dumpperState) ((struct*) t_person); }t_person; void setperNumber(t_person *const per,int num){ if(per==NULL) return; per->privateint=num; } void setperString(t_person *const per,char *string){ if(per==NULL) return; per->privateString=string; } void dumpperState(t_person *const per){ if(per==NULL) return; printf("value of private int==%d\n", per->privateint); printf("value of private string==%s\n", per->privateString); } void deleteperOBJ(struct t_person *const per){ free((void*)t_person->per); t_person ->per = NULL; } main(){ t_person *const per = (struct*) malloc(sizeof(t_person)); per = t_person -> struct per -> New(); per -> setperNumber (t_person *per, 123); per -> setperString(t_person *per, "No money"); dumpperState(t_person *per); deleteperOBJ(t_person *per); } Just to warn you, this program has several errors, and since I am a beginner I couldn't help but post this thread as a question. I am looking forward to assistance. Thanks in advance.
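
    For reference, a minimal sketch of how this pattern is usually wired in plain C: a struct holding data plus function pointers, and a "constructor" function that allocates it and assigns the pointers. The names mirror the post; the wiring itself is illustrative, not the only way to do it.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct t_person t_person;

        struct t_person {
            int   privateint;
            char *privateString;
            void (*setperNumber)(t_person *, int);
            void (*setperString)(t_person *, char *);
            void (*dumpperState)(t_person *);
        };

        static void setperNumber(t_person *per, int num)   { if (per) per->privateint = num; }
        static void setperString(t_person *per, char *str) { if (per) per->privateString = str; }
        static void dumpperState(t_person *per)
        {
            if (!per) return;
            printf("value of private int==%d\n", per->privateint);
            printf("value of private string==%s\n", per->privateString);
        }

        /* "constructor": allocates the object and wires up its methods */
        static t_person *newperOBJ(void)
        {
            t_person *per = malloc(sizeof *per);
            if (!per) return NULL;
            per->privateint   = 0;
            per->privateString = "";
            per->setperNumber = setperNumber;
            per->setperString = setperString;
            per->dumpperState = dumpperState;
            return per;
        }

        int main(void)
        {
            t_person *per = newperOBJ();
            per->setperNumber(per, 123);        /* 'per' is passed explicitly: C has no 'this' */
            per->setperString(per, "No money");
            per->dumpperState(per);
            free(per);                          /* no struct members to free in this sketch */
            return 0;
        }

    The key differences from the posted code: the typedef comes first so the struct can refer to itself, the method pointers take a plain t_person * (there is no (struct*) cast syntax), and the object itself is the first argument at every call site.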

  • Generate A Simple Read-Only DAL?

    - by David
    I've been looking around for a simple solution to this, trying my best to lean towards something like NHibernate, but so far everything I've found seems to be trying to solve a slightly different problem. Here's what I'm looking at in my current project: We have an IBM iSeries database as a primary repository for a third party software suite used for our core business (a financial institution). Part of what my team does is write applications that report on or key off of a lot of this data in some way. In the past, we've been manually creating ADO .NET connections (we're using .NET 3.5 and Visual Studio 2008, by the way) and manually writing queries, etc. Moving forward, I'd like to simplify the process of getting data from there for the development team. Rather than creating connections and queries and all that each time, I'd much rather a developer be able to simply do something like this: var something = (from t in TableName select t); And, ideally, they'd just get some IQueryable or IEnumerable of generated entities. This would be done inside a new domain core that I'm building where these entities would live and the applications would interface with it through a request/response service layer. A few things to note are: The entities that correspond to the database tables should be generated once and we'd prefer to manually keep them updated over time. That is, if columns/tables are added to the database then we shouldn't have to do anything. (If some are deleted, of course, it will break, but that's fine.) But if we need to use a new column, we should be able to just add it to the necessary class(es) without having to re-gen the whole thing. The whole thing should be SELECT-only. We're not doing a full DAL here because we don't want to be able to break anything in the database (even accidentally). We don't need any kind of mapping between our domain objects and the generated entity types. The domain barely covers a fraction of the data that's in there, most of it we'll never need, and we would rather just create re-usable maps manually over time. I already have a logical separation for the DAL where my "repository" classes return domain objects, I'm just looking for a better alternative to manual ADO to be used inside the repository classes. Any suggestions? It seems like what I'm doing is just enough outside the normal demand for DAL/ORM tools/tutorials online that I haven't been able to find anything. Or maybe I'm just overlooking something obvious?
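
    For what it's worth, a minimal sketch of a SELECT-only helper over plain ADO.NET (the iSeries provider and connection details are assumed, not shown): hand-maintained entity classes plus a mapper delegate, which matches the "generate once, update by hand, never write" constraints without bringing in an ORM. Names are illustrative.

        using System;
        using System.Collections.Generic;
        using System.Data;

        // Hand-maintained entity: add columns here only as you come to need them.
        public class Account
        {
            public string Number { get; set; }
            public decimal Balance { get; set; }
        }

        public static class ReadOnlyDb
        {
            // connectionFactory is assumed to wrap whatever iSeries provider you use.
            public static IEnumerable<T> Query<T>(Func<IDbConnection> connectionFactory,
                                                  string sql,
                                                  Func<IDataRecord, T> map)
            {
                using (IDbConnection conn = connectionFactory())
                {
                    conn.Open();
                    using (IDbCommand cmd = conn.CreateCommand())
                    {
                        cmd.CommandText = sql; // SELECT statements only, by convention
                        using (IDataReader reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                                yield return map(reader);
                        }
                    }
                }
            }
        }

    A repository method then reads like: ReadOnlyDb.Query(factory, "SELECT NUM, BAL FROM ACCOUNTS", r => new Account { Number = r.GetString(0), Balance = r.GetDecimal(1) }). It is not LINQ-to-tables, but it gives the team one-liner queries, keeps the DAL read-only by construction, and new columns cost one property plus one mapper line.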

  • How does System.Diagnostics.TraceListener prepend the message with the process name?

    - by btlog
    I have been looking at using System.Diagnostics.Trace for doing logging in a very basic app. Generally it does all I need it to do. The downside is that if I call Trace.TraceInformation("Some info"); the output is "SomeApp.Exe Information: 0: Some info". Initially this entertained me, but no longer. I would like to just output "Some info" to the console. So I thought writing a custom TraceListener, rather than using the inbuilt ConsoleTraceListener, would solve the problem. I can see a specific format: I want all the text after the second colon. Here is my attempt to see if this would work. class LogTraceListener : TraceListener { public override void Write(string message) { int firstColon = message.IndexOf(":"); int secondColon = message.IndexOf(":", firstColon + 1); Console.Write(message); } public override void WriteLine(string message) { int firstColon = message.IndexOf(":"); int secondColon = message.IndexOf(":", firstColon + 1); Console.WriteLine(message); } } If I output the value of firstColon it is always -1. If I put a break point, the message is always just "Some info". Where does all the other information come from? So I had a look at the call stack at the point just before Console.WriteLine was called. The method that called my WriteLine method is: System.dll!System.Diagnostics.TraceListener.TraceEvent(System.Diagnostics.TraceEventCache eventCache, string source, System.Diagnostics.TraceEventType eventType, int id, string message) + 0x33 bytes When I use Reflector to look at this method it all seems pretty straightforward. I can't see any code that changes the value of the string after I have sent it to Console.WriteLine. The only method that could possibly change the underlying string value is a call to UnsafeNativeMethods.EventWriteString, which has a parameter that is a pointer to the message. Does anyone understand what is going on here, and whether I can change the output to be just my message without the additional fluff? It seems like evil magic that I can pass the string "Some info" to Console.WriteLine (or any other method for that matter) and the string that comes out is different.
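
    The resolution of the "evil magic": TraceListener.TraceEvent writes the "SomeApp.Exe Information: 0:" header itself (via a separate Write call) and only then passes the bare message to WriteLine — so the prefix never travels through your message parameter at all, which is why firstColon is always -1. A sketch that suppresses the header by overriding TraceEvent directly:

        using System;
        using System.Diagnostics;

        class LogTraceListener : TraceListener
        {
            // TraceEvent normally emits the "source eventType: id :" header
            // before calling WriteLine(message) — override it to skip the header.
            public override void TraceEvent(TraceEventCache eventCache, string source,
                                            TraceEventType eventType, int id, string message)
            {
                WriteLine(message);
            }

            public override void Write(string message)     { Console.Write(message); }
            public override void WriteLine(string message) { Console.WriteLine(message); }
        }

    With this in place, Trace.TraceInformation("Some info") prints just "Some info".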

  • Translate parse_git_branch function to zsh from bash (for prompt)

    - by yar
    I am using this function in Bash function parse_git_branch { git_status="$(git status 2> /dev/null)" pattern="^# On branch ([^${IFS}]*)" if [[ ! ${git_status}} =~ "working directory clean" ]]; then state="*" fi # add an else if or two here if you want to get more specific if [[ ${git_status} =~ ${pattern} ]]; then branch=${BASH_REMATCH[1]} echo "(${branch}${state})" fi } but I'm determined to use zsh. While I can use this perfectly well as a shell script (even without a shebang), in my .zshrc the error is a parse error on this line: if [[ ! ${git_status}}... What do I need to do to get it ready for zsh? Edit: The actual error I'm getting is "parse error near }", and it refers to the line with the strange double }}, which works in Bash. Edit: Here's the final code, just for fun: parse_git_branch() { git_status="$(git status 2> /dev/null)" pattern="^# On branch ([^[:space:]]*)" if [[ ! ${git_status} =~ "working directory clean" ]]; then state="*" fi if [[ ${git_status} =~ ${pattern} ]]; then branch=${match[1]} echo "(${branch}${state})" fi } setopt PROMPT_SUBST PROMPT='$PR_GREEN%n@$PR_GREEN%m%u$PR_NO_COLOR:$PR_BLUE%2c$PR_NO_COLOR%(!.#.$)' RPROMPT='$PR_GREEN$(parse_git_branch)$PR_NO_COLOR' Thanks to everybody for your patience and help. Edit: The best answer has schooled us all: git status is porcelain (UI). Good scripting goes against Git plumbing. Here's the final function: parse_git_branch() { in_wd="$(git rev-parse --is-inside-work-tree 2>/dev/null)" || return test "$in_wd" = true || return state='' git diff-index HEAD --quiet 2>/dev/null || state='*' branch="$(git symbolic-ref HEAD 2>/dev/null)" test -z "$branch" && branch='<detached-HEAD>' echo "(${branch#refs/heads/}${state})" } PROMPT='$PR_GREEN%n@$PR_GREEN%m%u$PR_NO_COLOR:$PR_BLUE%2c$PR_NO_COLOR%(!.#.$)' RPROMPT='$PR_GREEN$(parse_git_branch)$PR_NO_COLOR' Note that only the prompt is zsh-specific. In Bash it would be your prompt plus "\$(parse_git_branch)". This might be slower (more calls to Git, but that's an empirical question) but it won't be broken by changes in Git (they don't change the plumbing). And that is very important for a good script moving forward. Days later: Ugh, it turns out that diff-index HEAD is NOT the same as checking the status against "working directory clean". So will this mean another plumbing call? I surely don't have time/expertise to write my own porcelain....

  • Primary language - QtC++, C#, Java?

    - by Airjoe
    I'm looking for some input, but let me start with a bit of background (for the tl;dr, skip to the end). I'm an IT major with a concentration in networking. While I'm not a CS major, nor do I want to program as a vocation, I do consider myself a programmer and do pretty well with the concepts involved. I've been programming since about 6th grade, starting out with a proprietary game-creation language that made my transition into C++ at college pretty easy. I like to make programs for myself and friends, and have been paid to program for local businesses. A bit about that: I wrote some programs for a couple of local businesses in my senior year of high school. I wrote management systems for local shops (inventory, phone/POS orders, timeclock, customer info, and more stuff I can't remember). It definitely turned out to be over my head, as I had never had any formal programming education. It was a great learning experience, but damn was it crappy code. Oh yeah, by the way, it was all VB6. So, I've used VB6 pretty extensively, I've used C++ in my classes (intro to programming up to algorithms), used Java a little bit in another class (had to write a ping client program, pretty easy) and used Java for some simple Project Euler problems to help learn the syntax while writing the program for the class. I've also used C# a bit for my own simple personal projects (simple programs, one which would just generate an HTTP request against a list of websites and notify me if one responded unexpectedly or not at all, and another which just held a list of things to do and periodically reminded me to do them), things I would've written in VB6 a year or two ago. I've just started using Qt C++ for some undergrad research I'm working on. Now that I've had some formal education, I [think I] understand organization in programming a lot better (I didn't even use classes in my VB6 programs where I really should have): how it's important to structure code, split it into functions where appropriate, document properly, be efficient in both memory and speed, use dynamic and modular programming, etc. I was looking for some input on which language to pick up as my "primary". As I'm not a "real programmer", it will be mostly hobby projects, but will include some 'real' projects I'm sure. From my perspective: Qt C++ and Java are cross-platform, which is cool. Java and C# run in a virtual machine, but I'm not sure if that's a big deal (something extra to distribute, possibly a bit slower? I think Qt would require additional distributables too, right?). I don't really know much more than this, so I appreciate any help, thanks! TL;DR: An avocational programmer looking for a language; I want quick and straightforward development, liked VB6, and will be working with database-driven GUI apps. Should I go with Qt C++, Java, C#, or perhaps something else?

  • circles and triangles problem

    - by Faken
    Hello everyone, I have an interesting problem here that I've been trying to solve for the last little while: I have 3 circles on a 2D xy plane, each with the same known radius. I know the coordinates of each of the three centers (they are arbitrary and can be anywhere). What is the largest triangle that can be drawn such that each vertex of the triangle sits on a separate circle, and what are the coordinates of those vertices? I've been looking at this problem for hours and asked a bunch of people, but so far only one person has been able to suggest a plausible solution (though I have no way of proving it). The solution that we have come up with involves first creating a triangle from the three circle centers. Next we look at each circle individually and calculate the equation of a line that passes through the circle's center and is perpendicular to the opposite edge. We then calculate the two intersection points of that line with the circle. This is then done for the next two circles, with a result of 6 points. We iterate over the 8 possible 3-point triangles that these 6 points create (the restriction is that each point of the big triangle must be on a separate circle) and find the maximum size. The results look reasonable (at least when drawn out on paper), and the method passes the special case where the centers of the circles all fall on a straight line (it gives the known largest triangle). Unfortunately I have no way of proving whether this is correct. I'm wondering if anyone has encountered a problem similar to this, and if so, how did you solve it? Note: I understand that this is mostly a math question and not programming; however, it is going to be implemented in code and it must be optimized to run very fast and efficiently. In fact, I already have the above solution in code and tested it to be working; if you would like to take a look, please let me know — I chose not to post it because it's all in vector form and pretty much impossible to figure out exactly what is going on (it's been condensed to be more efficient). Lastly, yes this is for school work, though it is NOT a homework question/assignment/project. It's part of my graduate thesis (albeit a very, very small part, but it technically is part of it). Thanks for your help.
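
    Since the heuristic is hard to prove, one cheap way to build confidence in it is to compare its output against a brute-force check on random instances. A minimal sketch in Python (function names are mine, purely illustrative): it discretizes each circle and takes the maximum-area vertex triple, which bounds the true optimum from below.

        import itertools
        import math

        def tri_area(a, b, c):
            # half the absolute cross product of (b - a) and (c - a)
            return abs((b[0] - a[0]) * (c[1] - a[1]) -
                       (b[1] - a[1]) * (c[0] - a[0])) / 2.0

        def brute_force_max(centers, r, samples=120):
            """Largest triangle with one vertex per circle, by sampling each circle."""
            rings = [[(cx + r * math.cos(2 * math.pi * k / samples),
                       cy + r * math.sin(2 * math.pi * k / samples))
                      for k in range(samples)]
                     for (cx, cy) in centers]
            return max(itertools.product(*rings), key=lambda t: tri_area(*t))

    With samples=120 this checks about 1.7 million triples — far too slow for the real implementation, but fine as an offline sanity test: if the 6-point construction ever returns a noticeably smaller area than the brute force on some random configuration, the heuristic is wrong for that case.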

  • Spinning a circle in J2ME using a Canvas.

    - by JohnQPublic
    Hello all! I have a problem where I need to make a multi-colored wheel spin using a Canvas in J2ME. What I need is for the user to be able to increase or decrease the speed of the wheel's spin. I have it mostly worked out (I think) but can't think of a way for the wheel to spin without causing my cellphone to crash. Here is what I have so far; it's close but not exactly what I need. class MyCanvas extends Canvas{ //wedgeOne/Two/Three define where this particular section of circle begins to be drawn from int wedgeOne; int wedgeTwo; int wedgeThree; int spinSpeed; MyCanvas(){ wedgeOne = 0; wedgeTwo = 120; wedgeThree = 240; spinSpeed = 0; } //Using the paint method to redraw the wheel public void paint(Graphics g){ //Redraw the circle with the current wedge series. g.setColor(255,0,0); g.fillArc(getWidth()/2, getHeight()/2, 100, 100, wedgeOne, 120); g.setColor(0,255,0); g.fillArc(getWidth()/2, getHeight()/2, 100, 100, wedgeTwo, 120); g.setColor(0,0,255); g.fillArc(getWidth()/2, getHeight()/2, 100, 100, wedgeThree, 120); } protected void keyPressed(int keyCode){ switch (keyCode){ //When the 6 button is pressed, the wheel spins forward 5 degrees. case KEY_NUM6: wedgeOne += 5; wedgeTwo += 5; wedgeThree += 5; repaint(); break; //When the 4 button is pressed, the wheel spins backwards 5 degrees. case KEY_NUM4: wedgeOne -= 5; wedgeTwo -= 5; wedgeThree -= 5; repaint(); } } I have tried using a redraw() method that adds spinSpeed to each of the wedge values while (spinSpeed > 0) and calls the repaint() method after the addition, but it causes a crash and lockup (I assume due to an infinite loop). Does anyone have any tips or ideas how I could automate the spin so you do not have to press the button every time you want it to spin? (P.S. - I have been lurking for a while, but this is my first post. If it's too general or asks for too much info (sorry if it is), I'll either remove it or fix it. Thank you!)
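
    The lockup is most likely because the while loop runs on the event thread, so keyPressed never returns and repaint never gets serviced. One way to automate the spin is to drive the wheel from java.util.Timer (available in MIDP) so the key handlers only adjust spinSpeed. A rough sketch, assuming it lives inside MyCanvas:

        import java.util.Timer;
        import java.util.TimerTask;

        // Inside MyCanvas: advance the wheel by spinSpeed degrees, 20 times a second.
        private Timer timer;

        void startSpinning() {
            timer = new Timer();
            timer.scheduleAtFixedRate(new TimerTask() {
                public void run() {
                    if (spinSpeed != 0) {
                        wedgeOne   += spinSpeed;
                        wedgeTwo   += spinSpeed;
                        wedgeThree += spinSpeed;
                        repaint();   // repaint() may be called from any thread
                    }
                }
            }, 0, 50);
        }

        void stopSpinning() {
            if (timer != null) timer.cancel();
        }

    KEY_NUM6 then becomes spinSpeed += 5; and KEY_NUM4 becomes spinSpeed -= 5; — no loop, no blocking, and the wheel keeps turning between key presses.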

  • Twisted + SQLAlchemy and the best way to do it.

    - by Khorkrak
    So I'm writing yet another Twisted-based daemon. It'll have an xmlrpc interface as usual, so I can easily communicate with it and have other processes interchange data with it as needed. This daemon needs to access a database. We've been using SQLAlchemy in place of hard-coded SQL strings for our latest projects - those mostly done for web apps in Pylons. We'd like to do the same for this app and re-use library code that makes use of SQLAlchemy. So what to do? Well of course, since that library was written for use in a Pylons app, it's all the straightforward blocking-style code that everyone is accustomed to, and all of the non-blocking is magically handled by Pylons via threading, thread locals, scoped sessions and so on. So now for Twisted I guess I'm a bit stuck. I could: 1) Just write the SQL I need directly, if it's minimal, and use the dbapi pool in Twisted to do runInteractions etc. when I need to hit the db. 2) Use the objects and inherently blocking methods in our library, and block now and then in my Twisted daemon. Bah. 3) Use sAsync, which was last updated in 2008: kind of reuse the models we have defined already, but not really, and it doesn't address code that needs to work in Pylons either. Does that even work with the latest version of SQLAlchemy? Who knows. That project looked great, though - why was it apparently abandoned? 4) Spawn a separate subprocess and have it deal with the library code and all its blocking, the results being returned to my daemon when ready as objects marshalled via YAML over xmlrpc. 5) Use deferToThread and then expunge the objects returned, having made sure to do eager loads so that I have all the stuff I might need. Seems kind of ugh to me. I'm also stuck using Python 2.5.4 atm, so no 2.6 yet, and I don't think I can just do an import from __future__ to get access to the cool new multiprocessing module stuff. That's OK though, I guess, as we've got interprocess communication down pretty well. So I'm leaning towards option 4, mostly as that would avoid the mortal sin of logic duplication with option 1 while also staying the heck away from threads. Any better ideas?
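
    For reference, option 5 is less ugly than it sounds; a minimal sketch (model and session factory names assumed, not from the post):

        from twisted.internet import threads

        def _blocking_lookup(session_factory, account_id):
            """Runs in a worker thread: do the ORM work, detach results, close up."""
            session = session_factory()
            try:
                obj = session.query(Account).get(account_id)  # eager-load what you need
                if obj is not None:
                    session.expunge(obj)  # detach so it is safe outside the session
                return obj
            finally:
                session.close()

        def lookup(session_factory, account_id):
            # Returns a Deferred that fires with the detached object (or None).
            return threads.deferToThread(_blocking_lookup, session_factory, account_id)

    The blocking code, session lifetime, and expunge all stay inside the thread; the reactor only ever sees a Deferred and a detached, fully-loaded object. That keeps the shared Pylons library intact, at the cost of being careful that nothing lazy-loads after the expunge.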

  • Having trouble understanding some code (Ruby on Rails)

    - by user284194
    I posted a question a while ago asking how I could limit the rate at which a form could be submitted from a Rails application. I was helped by a very patient user and their solution works great. The code was for my comments controller, and now I find myself wanting to add this functionality to another controller, my Messages controller. I immediately tried reusing the working code from the comments controller, but I couldn't get it to work. Instead of asking for the working code, could someone please help me understand my working comment controller code? class CommentsController < ApplicationController #... before_filter :post_check def record_post_time cookies[:last_post_at] = Time.now.to_i end def last_post_time Time.at((cookies[:last_post_at].to_i rescue 0)) end MIN_POST_TIME = 2.minutes def post_check return true if (Time.now - last_post_time) > MIN_POST_TIME flash[:warning] = "You are trying to reply too fast." @message = Message.find(params[:message_id]) redirect_to(@message) return false end #... def create @message = Message.find(params[:message_id]) @comment = @message.comments.build(params[:comment]) if @comment.save record_post_time flash[:notice] = "Replied to \"#{@message.title}\"" redirect_to(@message) else render :action => "new" end end def update @message = Message.find(params[:message_id]) @comment = Comment.find(params[:id]) if @comment.update_attributes(params[:comment]) record_post_time redirect_to post_comment_url(@message, @comment) else render :action => "edit" end end #... end My Messages controller is pretty much a standard Rails-generated controller with a few before filters, associated private methods for DRYing up the code, and a redirect for nonexistent pages. I'll explain how much of the code I understand. When a comment is created, a cookie is created with a last_post_at value. If they try to post another comment, the cookie is checked to see whether the last one was made in the last two minutes. If it was, a flash warning is displayed and no comment is recorded. What I don't really understand is how the post_check method works and how I can adapt it for my simpler Messages controller. I thought I could reuse all the code from the comments controller with the exception of the line: @message = Message.find(params[:message_id]) # (don't need the redirect code) in the post_check method. But it trips up on the record_post_time call in the create action/method. I really want to understand this. Can someone explain why this doesn't work? I greatly appreciate you reading my lengthy question.
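
    The usual reason a straight copy trips up is that record_post_time, last_post_time, and post_check are instance methods defined on CommentsController, so MessagesController cannot see them. Hoisting them into ApplicationController (or a shared module) lets both controllers use the same before_filter. A hedged sketch; root_url as the fallback redirect is my assumption, each controller can override it:

        class ApplicationController < ActionController::Base
          MIN_POST_TIME = 2.minutes

          private

          def record_post_time
            cookies[:last_post_at] = Time.now.to_i
          end

          def last_post_time
            Time.at((cookies[:last_post_at].to_i rescue 0))
          end

          def post_check
            return true if (Time.now - last_post_time) > MIN_POST_TIME
            flash[:warning] = "You are trying to reply too fast."
            redirect_to root_url   # override per controller if needed
            false
          end
        end

        class MessagesController < ApplicationController
          before_filter :post_check, :only => [:create, :update]
          # call record_post_time after a successful save, exactly as the
          # comments controller above does
        end

    Same cookie, same throttle window, no duplicated logic between the two controllers.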

  • how to set footer image in this video screen?

    - by bala
    hi, my problem is how to create a footer image on this video screen while it is playing. Here is my description of the first window:
    1) Header image: a stretched background image. The location of this external image comes from the application XML.
    2) Footer image: a stretched background image. The location of this external image comes from the application XML.
      2.a) The copyright, disclaimer and buy block: this block contains links to popup windows with the copyright and/or disclaimer, and an option to buy the advertisement-free version of the application. The content of this block — including the text color, the popup links and the texts themselves — is fed through the application XML feed.
    3) Carousel image: a stretched background image. The location of this external image comes from the application XML.
      3.a) The carousel contains objects that flow from right to left, possibly with an animation (a soft break of the slide). The first object is centered in the middle of the carousel and is the first element in the video feed; all subsequent video objects are added to the right of the centered object.
    4) Total video object: links to window two with the corresponding video. Visually it is built from these sub parts:
      4.a) a thumb object (possibly a playing video thumb); 4.b) a reflection of the thumb; 4.c) a textual explanation of the thumb.
    And the second window:
    1) Video stream: a video stream coming from an external server, streamed to the television (maybe upscaled) as a 720p stream.
    2) Advertisement: the type of advertisement shown overlaid on the video is based on settings in the video feed; this could mean Admob, AdSense, or a third-party image plus URL. When the advertisement is selected through navigation it is highlighted with a colored border (the color and thickness are managed through the application XML); when clicked, a browser opens with the associated site (the application is pushed to the background and resumes when the user is finished).
    3) A back button: an image and navigational element whose location comes from the application XML. The button is only shown when the cursor moves (a button is pressed on the remote); it highlights when selected, and when pressed forwards the screen to the main window. When the main window is opened, the video is removed from cache and memory and cannot be resumed from the point it was exited. please give me your idea....

  • Java template classes using generator or similar?

    - by Hugh Perkins
    Is there some library or generator that I can use to generate multiple templated Java classes from a single template? Obviously Java does have a generics implementation itself, but since it uses type erasure, there are lots of situations where it is less than adequate. For example, if I want to make a self-growing array like this: class EasyArray<T> { T[] backingarray; } (where T is a primitive type), then this isn't possible. This is true for anything which needs an array, for example high-performance templated matrix and vector classes. It should probably be possible to write a code generator which takes a templated class and generates multiple instantiations for different types, e.g. for 'double' and 'float' and 'int' and 'String'. Is there something that already exists that does this? Edit: note that using an array of Object is not what I'm looking for, since it's no longer an array of primitives. An array of primitives is very fast, and uses only as much space as sizeof(primitive) * length-of-array. An array of Object is an array of pointers/references that point to Double objects or similar, which could be scattered all over the place in memory, require garbage collection and allocation, and imply a double indirection on access. Edit2: good god, voted down for asking for something that probably doesn't currently exist, but is technically possible and feasible? Does that mean that people looking for ways to improve things have already left the Java community? Edit3: Here is code to show the difference in performance between primitive and boxed arrays (tic()/toc() are simple timing helpers): int N = 10*1000*1000; double[]primArray = new double[N]; for( int i = 0; i < N; i++ ) { primArray[i] = 123.0; } Object[] objArray = new Double[N]; for( int i = 0; i < N; i++ ) { objArray[i] = 123.0; } tic(); primArray = new double[N]; for( int i = 0; i < N; i++ ) { primArray[i] = 123.0; } toc(); tic(); objArray = new Double[N]; for( int i = 0; i < N; i++ ) { objArray[i] = 123.0; } toc(); Results: double[] array: 148 ms; Double[] array: 4614 ms. Not even close!
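
    Absent an off-the-shelf tool, a workable generator is just text substitution over a template file run as a build step. A minimal sketch in plain Java — the @T@/@Name@ marker convention and file names are mine, purely illustrative:

        import java.io.*;

        public class Specialize {
            // Reads EasyArray.java.tpl (containing @T@ and @Name@ markers) and
            // emits DoubleEasyArray.java, FloatEasyArray.java, ...
            public static void main(String[] args) throws IOException {
                String template = read("EasyArray.java.tpl");
                for (String type : new String[] {"double", "float", "int", "long"}) {
                    String name = Character.toUpperCase(type.charAt(0)) + type.substring(1);
                    write(name + "EasyArray.java",
                          template.replace("@Name@", name).replace("@T@", type));
                }
            }

            static String read(String path) throws IOException {
                StringBuilder sb = new StringBuilder();
                BufferedReader in = new BufferedReader(new FileReader(path));
                try {
                    String line;
                    while ((line = in.readLine()) != null) sb.append(line).append('\n');
                } finally { in.close(); }
                return sb.toString();
            }

            static void write(String path, String text) throws IOException {
                Writer out = new FileWriter(path);
                try { out.write(text); } finally { out.close(); }
            }
        }

    The template would declare, e.g., class @Name@EasyArray { @T@[] backingarray; ... } — so each generated class carries a real primitive array and none of the boxing overhead measured above.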

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind I need to describe this while abstracting away all possibly confidential info: I've been put in charge of a 10-year-old transactional system in which the majority of the business logic is implemented at the database level (triggers, stored procedures, etc). Win2000 Server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :( The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters; let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular - ranging from 0 to 60 times a minute, depending on what items the program instance is responsible for). Performance of the system has dropped considerably with 10 years of data growth: the reason is the deadlocks, and specifically the deadlock wait times. The deadlock is on the Employee table. I have discovered: In sp_ProcessTrans' execution, it selects from the Employee table 7 times (don't ask). The select is done on a field that is NOT the primary key. No index exists on this field, thus a table scan is performed — 7 times, per transaction. So the reason for the deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good: almost unique, NUM(7), very rarely changes). Immediate improvement in the test environment. The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations, and I'd need to simulate 'realistic' activity on those stations, so virtualization is out). I need to know if I must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements, extra wait time, etc.) in creating this index on the production database while the transactions are still taking place? (Although I can select a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? (I'm not looking forward to needing to restore the DB if something goes wrong; restoring would take a lot of time with 10 years of data.)
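
    On the specific risk: ONLINE index builds only arrived in SQL Server 2005, so on SQL 2000 the build holds locks for its entire duration — and a clustered index build physically rewrites the table into the new order, blocking writers (and, in practice, readers) until it finishes. A plain CREATE INDEX does not corrupt data; the dangers are blocking, client timeouts, and the tempdb/log space the rebuild consumes, which is why a genuinely quiet window or scheduled downtime is the safe call. The operation itself, with illustrative names:

        -- SQL Server 2000: no ONLINE option; the table is locked and rewritten
        -- in clustered order for the duration of the build.
        CREATE CLUSTERED INDEX IX_Employee_LookupField
            ON dbo.Employee (LookupField)

        -- If it overruns the maintenance window, it can be dropped again:
        -- DROP INDEX Employee.IX_Employee_LookupField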

  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a user to a publication. Users can take multiple roles within multiple publications. Example table setup: "users" : user_id, firstname, lastname "publications" : publication_id, name "link_writers" : user_id, publication_id "link_editors" : user_id, publication_id Current pseudo-SQL: SELECT * FROM ( (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%') UNION (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%') ) AS dt JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id At the moment my roles statement is: SELECT dt2.user_id, dt2.publication_id, dt.role FROM ( (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers) UNION (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors) ) AS dt2 The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables: "link_publishers": user_id, publisher_group_id "link_publisher_groups": publisher_group_id, publication_id So in that instance, the query forming part of my UNION would be: SELECT 'publisher' AS role, link_publishers.user_id, link_publisher_groups.publication_id FROM link_publishers JOIN link_publisher_groups ON lpg.group_id = lp.group_id I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. The initial lookup in the users table is fast, but the joining really slows things down. How can I join on only the relevant roles? -------------------------- EDIT ---------------------------------- EXPLAIN output above (open in a new window to see the full resolution). The bottom bit in red is the "WHERE firstname LIKE '%Jenkz%'"; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count — but I think this is unavoidable, unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison inside the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams", etc.

  • Haskell Add Function Return to List Until Certain Length

    - by kienjakenobi
    I want to write a function which takes a list and constructs a subset of that list of a certain length based on the output of a function. If I were simply interested in the first 50 elements of the sorted list xs, then I would use fst (splitAt 50 (sort xs)). However, the problem is that elements in my list rely on other elements in the same list. If I choose element p, then I MUST also choose elements q and r, even if they are not in the first 50 elements of my list. I am using a function finderFunc which takes an element a from the list xs and returns a list with the element a and all of its required elements. finderFunc works fine. Now, the challenge is to write a function which builds a list whose total length is 50 based on multiple outputs of finderFunc. Here is my attempt at this: finish :: [a] -> [a] -> [a] --This is the base case, which adds nothing to the final list finish [] fs = [] --The function is recursive, so the fs variable is necessary so that finish -- can forward the incomplete list to itself. finish ps fs -- If the final list fs is too small, add elements to it | length fs < 50 && length (fs ++ newrs) <= 50 = fs ++ finish newps newrs -- If the length is met, then add nothing to the list and quit | length fs >= 50 = finish [] fs -- These guard statements are currently lacking, not the main problem | otherwise = finish [] fs where --Sort the candidate list sortedps = sort ps --(finderFunc a) returns a list of type [a] containing a and all the -- elements which are required to go with it. This is the interesting -- bit. rs is also a subset of the candidate list ps. rs = finderFunc (head sortedps) --Remove those elements which are already in the final list, because -- there can be overlap newrs = filter (`notElem` fs) rs --Remove the elements we will add to the list from the new list -- of candidates newps = filter (`notElem` rs) ps I realize that the above if statements will, in some cases, not give me a list of exactly 50 elements. This is not the main problem, right now. The problem is that my function finish does not work at all as I would expect it to. Not only does it produce duplicate elements in the output list, but it sometimes goes far above the total number of elements I want to have in the list. The way this is written, I usually call it with an empty list, such as: finish xs [], so that the list it builds on starts as an empty list.
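
    One way to get both the duplicates and the overshoot under control is to thread a single accumulator through the recursion instead of concatenating fs with a recursive call that re-counts it. A sketch against the post's finderFunc (here passed as a parameter so the snippet is self-contained; the skip-if-it-doesn't-fit policy is my assumption about the intended behavior):

        import Data.List (sort)

        finish :: Ord a => (a -> [a]) -> [a] -> [a]
        finish finderFunc = go [] . sort
          where
            go acc [] = acc
            go acc (p:ps)
              | length acc >= 50             = acc        -- target met, stop
              | length acc + length new > 50 = go acc ps  -- group won't fit; skip p
              | otherwise                    = go (acc ++ new) (filter (`notElem` new) ps)
              where
                new = filter (`notElem` acc) (finderFunc p)

    Because acc is the one and only final list under construction, nothing is ever counted twice: new is always disjoint from acc by construction, and the length check sees the true running total, so the result can neither duplicate elements nor blow past 50.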

  • Undefined template methods trick?

    - by Matthieu M.
    A colleague of mine told me about a little piece of design he has used with his team that sent my mind boiling. It's a kind of traits class that they can specialize in an extremely decoupled way. I've had a hard time understanding how it could possibly work, and I am still unsure of the idea I have, so I thought I would ask for help here. We are talking g++ here, specifically versions 3.4.2 and 4.3.2 (it seems to work with both). The idea is quite simple: 1- Define the interface // interface.h template <class T> struct Interface { void foo(); // the method is not implemented, it could not work if it was }; // // I do not think it is necessary // but they prefer free-standing methods with templates // because of the automatic argument deduction // template <class T> void foo(Interface<T>& interface) { interface.foo(); } 2- Define a class, and in the source file specialize the interface for this class (defining its methods) // special.h class Special {}; // special.cpp #include "interface.h" #include "special.h" // // Note that this specialization is not visible outside of this translation unit // template <> struct Interface<Special> { void foo() { std::cout << "Special" << std::endl; } }; 3- To use, it's simple too: // main.cpp #include "interface.h" class Special; // yes, it only costs a forward declaration // which helps much in term of dependencies int main(int argc, char* argv[]) { Interface<Special> special; foo(special); return 0; }; It's an undefined symbol if no translation unit defined a specialization of Interface for Special. Now, I would have thought this would require the export keyword, which to my knowledge has never been implemented in g++ (and was only implemented once in a C++ compiler, with its authors advising everyone else not to, given the time and effort it took them). I suspect it's got something to do with the linker resolving the template methods... Have you ever met anything like this before? Does it conform to the standard, or do you think it's a fortunate coincidence that it works? I must admit I am quite puzzled by the construct...
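
    For what it's worth, the variant that keeps the decoupling while staying on firm standard ground is to declare the specialization in the header and define its member in one translation unit — then every user sees the declaration before use (the version above, where the specialization is invisible at the point of use, is generally considered ill-formed, no diagnostic required, which would make the working build a fortunate linker coincidence rather than a guarantee). A sketch along the lines of the code above:

        // special.h
        #include "interface.h"
        class Special {};

        // the specialization is declared where users can see it;
        // the member stays undefined here, so the coupling stays in special.cpp
        template <>
        struct Interface<Special>
        {
            void foo();   // declared, not defined
        };

        // special.cpp
        #include "special.h"
        #include <iostream>

        void Interface<Special>::foo()
        {
            std::cout << "Special" << std::endl;
        }

    main.cpp then includes special.h instead of relying on the forward declaration, and still links against exactly one definition.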

  • Suggest an alternative way to organize/build a database solution.

    - by Hamish Grubijan
    We are using Visual Studio 2010, but this was first conceived with VS2003. I will forward the best suggestions to my team. The current setup almost makes me vomit. It is a C# solution with most projects containing .sql files. Because we support Microsoft, Oracle, and Sybase, we home-brewed a pre-processor, much like the C preprocessor, except that substitutions are performed by a home-brewed C# program without using yacc or tools like that. #ifdefs are used for conditional macro definitions, and yeah - macros are the way this is done. A macro can expand to another macro or two, but this should eventually terminate. Only macros have #ifdef in them - the rest of the SQL-like code just uses these macros. Now, the various configurations: Debug, MNDebug, MNRelease, Release, SQL_APPLY_ALL, SQL_APPLY_MSFT, SQL_APPLY_ORACLE, SQL_APPLY_SYBASE, SQL_BUILD_OUTPUT_ALL, SQL_COMPILE, as well as 2 more. Also: Any CPU, Mixed Platforms, Win32. What drives me nuts is having to configure it correctly, as well as choosing the right one out of 12 x 3 = 36 configurations, as well as having to substitute the database name depending on the type of database: config, main, or gateway. I am thinking that configuration should be reduced to just Debug, Release, and SQL_APPLY. Also, using 0, 1, and 2 seems so 80s ... Finally, I think my intention to build or not build 3 types of databases for 3 types of vendors should be configured with just a tic-tac-toe board like: XOX OOX XXX In this case it would mean build MSFT+CONFIG, all SYBASE, and all GATEWAY. Still, the overall thing, which uses a text file and a pre-processor and many configurations, seems incredibly clunky. It is year 2010 now and someone out there is bound to have a very clean and/or creative tool/solution. The only pro is that the existing collection of macros has been well tested. Have you ever had to write SQL that would work for several vendors? How did you do it? SqlVars.txt (every one of 30 users makes a copy of a template and modifies it to suit their needs): // This is the default parameters file and should not be changed. // You can overwrite any of these parameters by copying the appropriate // section to override into SqlVars.txt and providing your own information. //Build types are 0-Config, 1-Main, 2-Gateway BUILD_TYPE=1 REMOVE_COMMENTS=1 // Login information used when applying to a Microsoft SQL server database SQL_APPLY_MSFT_version=SQL2005 SQL_APPLY_MSFT_database=msftdb SQL_APPLY_MSFT_server=ABC SQL_APPLY_MSFT_user=msftusr SQL_APPLY_MSFT_password=msftpwd // Login information used when applying to an Oracle database SQL_APPLY_ORACLE_version=ORACLE10g SQL_APPLY_ORACLE_server=oradb SQL_APPLY_ORACLE_user=orausr SQL_APPLY_ORACLE_password=orapwd // Login information used when applying to a Sybase database SQL_APPLY_SYBASE_version=SYBASE125 SQL_APPLY_SYBASE_database=sybdb SQL_APPLY_SYBASE_server=sybdb SQL_APPLY_SYBASE_user=sybusr SQL_APPLY_SYBASE_password=sybpwd ... (THIS GOES ON)

  • Searching for duplicate records within a text file where the duplicate is determined by only two fields

    - by plg
    First, Python newbie; be patient/kind. Next, once a month I receive a large text file (think 7 million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine if the record is duplicated, I compress all special characters from the part number (except . and #) and create a compressed part number. The test for duplicates becomes the supplier code and compressed part number combination. This part is fairly straightforward. Currently, I am just copying the original file with 2 new columns (compressed part and duplicate indicator). If the part is a duplicate, I put a "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, at the same time) to get the previous record where there was a supplier code/compressed part number match. So far, my code looks like this: Compress Full Part to a Compressed Part and Check for Duplicates on Supplier Code and Compressed Part combination import sys import re import time ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ start=time.time() try: file1 = open("C:\Accounting\May Accounting\May.txt", "r") except IOError: print sys.stderr, "Cannot Open Read File" sys.exit(1) try: file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a") except IOError: print sys.stderr, "Cannot Open Write File" sys.exit(1) hdrList="CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR" file2.write(hdrList+chr(10)) lines_seen=set() affirm="YES" records = file1.readlines() for record in records: fields = record.split(chr(124)) if fields[0]=="CIGSupplier": continue #If incoming file has a header line, skip it file2.write(fields[0]+"|"), #Supplier Code file2.write(fields[1]+"|"), #Full_Part file2.write(fields[2]+"|"), #Part Status file2.write(fields[3]+"|"), #Alias Flag file2.write(re.sub("[$\r\n]", "", fields[4])+"|"), #Acquisition Flag file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1])+"|"), #Compressed_Part dupechk=fields[0]+"|"+re.sub("[^0-9a-zA-Z.#]", "", fields[1]) if dupechk not in lines_seen: file2.write(chr(10)) lines_seen.add(dupechk) else: file2.write(affirm+chr(10)) print "it took", time.time() - start, "seconds." ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ file2.close() file1.close() It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import the results into Access and do a self join to locate the duplicates. Loading/querying/exporting results in Access for a file this size takes around an hour, so I would like to be able to export the matched duplicates to another text file or an Excel file. Confusing enough? Thanks.
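
    To pair each duplicate with its earlier matching record in the same pass — skipping the Access self-join entirely — one option is to keep a dict of first occurrences instead of a set. A sketch in the same Python 2 style (output file name is illustrative; note it holds the first occurrence of every key in memory, which is fine for 7 million short records on a reasonably equipped machine):

        import re

        infile  = open("C:\\Accounting\\May Accounting\\May.txt", "r")
        outfile = open("C:\\Accounting\\May Accounting\\May_DUPES.txt", "w")

        seen = {}  # supplier|compressed-part -> first record seen with that key
        for record in infile:
            fields = record.split("|")
            if fields[0] == "CIGSupplier":
                continue  # skip header
            key = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
            if key in seen:
                outfile.write(seen[key])  # the earlier matching record...
                outfile.write(record)     # ...immediately followed by its duplicate
            else:
                seen[key] = record

        outfile.close()
        infile.close()

    Iterating over the file object directly (instead of readlines()) also avoids loading all 7 million records into a list up front.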

  • prolog sets problem, stack overflow

    - by garm0nboz1a
    Hi. I'm gonna show some code and ask: what could be optimized, and where am I stuck? sublist([], []). sublist([H | Tail1], [H | Tail2]) :- sublist(Tail1, Tail2). sublist(H, [_ | Tail]) :- sublist(H, Tail). less(X, X, _). less(X, Z, RelationList) :- member([X,Z], RelationList). less(X, Z, RelationList) :- member([X,Y], RelationList), less(Y, Z, RelationList), \+less(Z, X, RelationList). lessList(X, LessList, RelationList) :- findall(Y, less(X, Y, RelationList), List), list_to_set(List, L), sort(L, LessList), !. list_mltpl(List1, List2, List) :- findall(X, ( member(X, List1), member(X, List2)), List). chain([_], _). chain([H,T | Tail], RelationList) :- less(H, T, RelationList), chain([T|Tail], RelationList), !. have_inf(X1, X2, RelationList) :- lessList(X1, X1_cone, RelationList), lessList(X2, X2_cone, RelationList), list_mltpl(X1_cone, X2_cone, Cone), chain(Cone, RelationList), !. relations(List, E) :- findall([X1,X2], (member(X1, E), member(X2, E), X1 =\= X2), Relations), sublist(List, Relations). semilattice(List, E) :- forall( (member(X1, E), member(X2, E), X1 < X2), have_inf(X1, X2, List) ). main(E) :- relations(X, E), semilattice(X, E). I'm trying to model all possible graph sets of N elements. The predicate relations(List, E) connects a list of possible graphs (List) and the input set E. Then I describe the semilattice predicate to check the relations' List for some properties. So, here's what I have. 1) semilattice/2 is working fast and clear ?- semilattice([[1,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]). true. ?- semilattice([[1,3],[1,4],[2,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]). false. 2) relations/2 is not working well ?- findall(X, relations(X,[1,2,3,4]), List), length(List, Len), writeln(Len),fail. 4096 false. ?- findall(X, relations(X,[1,2,3,4,5]), List), length(List, Len), writeln(Len),fail. ERROR: Out of global stack ^ Exception: (11) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886), '$bags':fa_loop(_G263, user:relations(_G263, [1, 2, 3, 4|...]), 17852886, _G268, []), _G835, '$bags':'$destroy_findall_bag'(17852886)) ? abort % Execution Aborted 3) Combining them to find all possible semilattices does not work at all. ?- main([1,2]). ERROR: Out of local stack ^ Exception: (15) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886), '$bags':fa_loop(_G41, user:less(1, _G41, [[1, 2], [2, 1]]), 17852886, _G52, []), _G4767764, '$bags':'$destroy_findall_bag'(17852886)) ?
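
    For scale, the blow-up here is simple arithmetic: relations/2 enumerates every sublist of the n(n-1) ordered pairs, i.e. 2^(n(n-1)) candidate relation lists. For n = 4 that is 2^12 = 4096 — exactly the count printed above — but for n = 5 it is 2^20 = 1,048,576 lists of up to 20 pairs each, all materialized at once inside findall/3, which is what exhausts the global stack. Any fix has to avoid collecting the candidates into one bag, e.g. by testing each generated relation against semilattice/2 on backtracking instead of wrapping relations/2 in findall/3.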

  • JPanel's child components paint/layout problem

    - by Tom Brito
    I'm having a problem where, when my frame is shown (after a login dialog), the buttons are not in the correct position; then after some milliseconds they move to the right position (the center of the panel with BorderLayout). When I make an SSCCE, it works correctly, but when I run my whole code I get this brief, milliseconds-long delay before the buttons go to the correct place. Unfortunately, I can't post the whole code, but the method that shows the frame is: public void login(JComponent userView) { centerPanel.removeAll(); centerPanel.add(userView); centerPanel.revalidate(); centerPanel.repaint(); frame.setVisible(true); } What would cause this delay in the panel layout? (I'm running everything on the EDT.) -- update On my machine, this SSCCE shows the layout problem about 2 times out of 10: import java.awt.BorderLayout; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.SwingUtilities; public class TEST { public static void main(String[] args) throws Exception { SwingUtilities.invokeAndWait(new Runnable() { @Override public void run() { System.out.println("Debug test..."); JPanel btnPnl = new JPanel(); btnPnl.add(new JButton("TEST")); JFrame f = new JFrame("TEST"); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.getContentPane().setLayout(new BorderLayout()); f.getContentPane().add(btnPnl); f.pack(); f.setSize(800, 600); f.setVisible(true); System.out.println("End debug test!"); } }); } } The button first appears in the upper left, and then moves to the center. Please note that I want to understand this, not just correct it. Is it a Java bug? -- update OK, so the SSCCE doesn't show the problem for those of you who tried it. Maybe it's a performance problem on my machine. But this doesn't answer the question; I still suspect Java Swing is doing the layout on another thread behind the scenes.

  • Is 4-5 years the “Midlife Crisis” for a programming career?

    - by Jeffrey
    I’ve been programming C# professionally for a bit over 4 years now. For the past 4 years I’ve worked for a few small/medium companies ranging from “web/ads agencies”, small industry specific software shops to a small startup. I've been mainly doing "business apps" that involves using high-level programming languages (garbage collected) and my overall experience was that all of the works I’ve done could have been more professional. A lot of the things were done incorrectly (in a rush) mainly due to cost factor that people always wanted something “now” and with the smallest amount of spendable money. I kept on thinking maybe if I could work for a bigger companies or a company that’s better suited for programmers, or somewhere that's got the money and time to really build something longer term and more maintainable I may have enjoyed more in my career. I’ve never had a “mentor” that guided me through my 4 years career. I am pretty much blog / google / self taught programmer other than my bachelor IT degree. I’ve also observed another issue that most so called “senior” programmer in “my working environment” are really not that senior skill wise. They are “senior” only because they’ve been a long time programmer, but the code they write or the decisions they make are absolutely rubbish! They don't want to learn, they don't want to be better they just want to get paid and do what they've told to do which make sense and most of us are like that. Maybe that’s why they are where they are now. But I don’t want to become like them I want to be better. I’ve run into a mental state that I no longer intend to be a programmer for my future career. I started to think maybe there are better things out there to work on. The more blogs I read, the more “best practices” I’ve tried the more I feel I am drifting away from “my reality”. But I am not a great programmer otherwise I don't think I am where I am now. I think 4-5 years is a stage that can be a step forward career wise or a step out of where you are. I just wanted to hear what other have to say about what I’ve mentioned above and whether you’ve experienced similar situation in your past programming career and how you dealt with it. Thanks.

  • Need help with 2 MySql Queries. Join vs Subqueries.

    - by BugBusterX
    I have 2 tables: user: id, name message: sender_id, receiver_id, message, read_at, created_at There are 2 results I need to retrieve and I'm trying to find the best solution. I have included queries that I'm using in the very end. I need to retrieve a list of users, and also with each user have information available whether there are any unread messages from each user (them as sender, me as receiver) and whether or not there are any read messages between us ( they send I'm receiver or I send they are receivers) I need Same as above, but only those members where there has been any messaging between us, sorted by unread first, then by last message received. Can you advise please? Should this be done with joins or subqueries? In first case I do not need the count, I just need to know whether or not there is at least one unread message. I'm posting code and my current queries, please have a look when you get a chance: BTW, everything is the way I want in fist query. My concern is: In second query I would like to order by messages.created_at, but I dont think I can do that with grouping? And also I dont know if this approach is the most optimized and fast. CREATE TABLE `user` ( `id` bigint(20) NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`) ) INSERT INTO `user` VALUES (1,'User 1'),(2,'User 2'),(3,'User 3'),(4,'User 4'),(5,'User 5'); CREATE TABLE `message` ( `id` bigint(20) NOT NULL AUTO_INCREMENT, `sender_id` bigint(20) DEFAULT NULL, `receiver_id` bigint(20) DEFAULT NULL, `message` text, `read_at` datetime DEFAULT NULL, `created_at` datetime NOT NULL, PRIMARY KEY (`id`) ) INSERT INTO `message` VALUES (1,3,1,'Messge',NULL,'2010-10-10 10:10:10'),(2,1,4,'Hey','2010-10-10 10:10:12','2010-10-10 10:10:11'),(3,4,1,'Hello','2010-10-10 10:10:19','2010-10-10 10:10:15'),(4,1,4,'Again','2010-10-10 10:10:25','2010-10-10 10:10:21'),(5,3,1,'Hiii',NULL,'2010-10-10 10:10:21'); SELECT u.*, m_new.id as have_new, m.id as have_any FROM user u LEFT JOIN message m_new ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL) LEFT JOIN message m ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1)) GROUP BY u.id SELECT u.*, m_new.id as have_new, m.id as have_any FROM user u LEFT JOIN message m_new ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL) LEFT JOIN message m ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1)) where m.id IS NOT NULL GROUP BY u.id

  • Securing paths in PHP

    - by tjm
    I'm writing some PHP which takes some paths to different content directories and uses these to include various parts of pages later. I'm trying to ensure that the paths are as they seem and that none of them break the rules of the application. I have PRIVATEDIR, which must lie above DOCUMENT_ROOT (aka PUBLICDIR); CONTENTDIR, which must lie within PRIVATEDIR and not go back below PUBLICDIR; and some other *DIR's which must remain within CONTENTDIR. Currently I set up some defaults, then override the ones the user specifies, and then sanity-check them with the following. private function __construct($options) { error_reporting(0); if(is_array($options)) { $this->opts = array_merge($this->opts, $options); } if($this->opts['STATUS']==='debug') { error_reporting(E_ALL | E_NOTICE | E_STRICT); } $this->opts['PUBLICDIR'] = realpath($_SERVER['DOCUMENT_ROOT']) .DIRECTORY_SEPARATOR; $this->opts['PRIVATEDIR'] = realpath($this->opts['PUBLICDIR'] .$this->opts['PRIVATEDIR']) .DIRECTORY_SEPARATOR; $this->opts['CONTENTDIR'] = realpath($this->opts['PRIVATEDIR'] .$this->opts['CONTENTDIR']) .DIRECTORY_SEPARATOR; $this->opts['CACHEDIR'] = realpath($this->opts['PRIVATEDIR'] .$this->opts['CACHEDIR']) .DIRECTORY_SEPARATOR; $this->opts['ERRORDIR'] = realpath($this->opts['CONTENTDIR'] .$this->opts['ERRORDIR']) .DIRECTORY_SEPARATOR; $this->opts['TEMPLATEDIR' = realpath($this->opts['CONTENTDIR'] .$this->opts['TEMPLATEDIR']) .DIRECTORY_SEPARATOR; // then here I have to check that PRIVATEDIR is above PUBLICDIR // and that all the rest remain within private dir and don't drop // down into (or below) PUBLICDIR again. And die with an error if // they don't conform. } The thing is, this seems like a lot of work to do, especially as it must run every time a page is accessed, before I can do anything else, e.g. check for a cached version of the page I'm serving. Part of me is thinking: since all of these paths are predefined by the maintainer of the site, they SHOULD be aware of what paths they are allowing access to and ensure they are secure. But I think I'm thinking that because I am currently said maintainer, and I KNOW my paths conform to the rules. That said, I do want to secure this thing against accidental errors by future maintainers (and I bet, now that I've said "I KNOW..." above, probably from myself somewhere down the line). This just feels like a suboptimal solution. I wonder how fast it would really be, and what you would suggest to improve it or as an alternative? Thanks.
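
    The repetitive checks collapse nicely if each rule is phrased as "the resolved path must sit inside this base directory". A minimal sketch (function name is mine):

        <?php
        // Returns the canonical path if $path resolves inside $base, else false.
        function confine($path, $base)
        {
            $real = realpath($path);
            $base = realpath($base);
            if ($real === false || $base === false) {
                return false;
            }
            $real = rtrim($real, DIRECTORY_SEPARATOR) . DIRECTORY_SEPARATOR;
            $base = rtrim($base, DIRECTORY_SEPARATOR) . DIRECTORY_SEPARATOR;
            return strpos($real, $base) === 0 ? $real : false;
        }

        // e.g. CONTENTDIR must resolve inside PRIVATEDIR:
        $content = confine($opts['PRIVATEDIR'] . $opts['CONTENTDIR'], $opts['PRIVATEDIR']);
        if ($content === false) {
            die('CONTENTDIR escapes PRIVATEDIR');
        }

    The "must lie above PUBLICDIR" rule is just the negation: require confine($privateDir, $publicDir) === false. Since realpath resolves symlinks and ../ sequences before the prefix comparison, the check holds even for paths that try to climb back out; and because the inputs are fixed per deployment, the result could be validated once and cached rather than re-checked on every request.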

  • Calculating all distances between one point and a group of points efficiently in R

    - by dbarbosa
    Hi, First of all, I am new to R (I started yesterday). I have two groups of points, data and centers, the first one of size n and the second of size K (for instance, n = 3823 and K = 10), and for each i in the first set, I need to find the j in the second with the minimum distance. My idea is simple: for each i, let dist[j] be the distance between i and j; I only need to use which.min(dist) to find what I am looking for. Each point is an array of 64 doubles, so > dim(data) [1] 3823 64 > dim(centers) [1] 10 64 I have tried with for (i in 1:n) { for (j in 1:K) { d[j] <- sqrt(sum((centers[j,] - data[i,])^2)) } S[i] <- which.min(d) } which is extremely slow (with n = 200, it takes more than 40s!!). The fastest solution that I wrote is distance <- function(point, group) { return(dist(t(array(c(point, t(group)), dim=c(ncol(group), 1+nrow(group)))))[1:nrow(group)]) } for (i in 1:n) { d <- distance(data[i,], centers) which.min(d) } Even though it does a lot of computation that I don't use (because dist(m) computes the distance between all rows of m), it is way faster than the other one (can anyone explain why?), but it is not fast enough for what I need, because it will not be used only once. And also, the distance code is very ugly. I tried to replace it with distance <- function(point, group) { return (dist(rbind(point,group))[1:nrow(group)]) } but this seems to be twice as slow. I also tried to use dist for each pair, but that is also slower. I don't know what to do now. It seems like I am doing something very wrong. Any idea on how to do this more efficiently? ps: I need this to implement k-means by hand (and I need to do it; it is part of an assignment). I believe I will only need Euclidean distance, but I am not yet sure, so I would prefer code where the distance computation can be replaced easily. stats::kmeans does all the computation in less than one second.
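
    The answer to the "why faster" question is vectorization: one call that works on whole matrices beats n*K interpreted-loop iterations. Taken to its conclusion, the entire n x K distance matrix comes out of one matrix product, using ||x - c||^2 = ||x||^2 + ||c||^2 - 2*x.c. A sketch with data and centers as above (function name is mine):

        nearest_center <- function(data, centers) {
          # all squared Euclidean distances at once: an n x K matrix
          cross <- data %*% t(centers)
          d2 <- outer(rowSums(data^2), rowSums(centers^2), "+") - 2 * cross
          apply(d2, 1, which.min)   # index of the closest center per point
        }

        S <- nearest_center(data, centers)

    Since which.min of the squared distance equals which.min of the distance, the sqrt is never needed. Swapping in a different metric means replacing the one d2 line, which keeps the "distance computation can be replaced easily" requirement.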

  • javascript ul button dropdown breaks after more than one selection

    - by Apprentice Programmer
    So i have a button drop down list that i want users to select a choice, and basically the button will return the users selection.Pretty straight forward, tested on jsFiddle and works great. Using ruby on rails btw, so i'm not sure if it might conflicting the way rails handle javascript actions. Heres the code: <%= form_for(@user, :html => {:class => 'form-horizontal'}) do |f| %> <fieldset> <p>Do you have experience in Business? If yes, select one of the following: <div class="input-group"> <div class="input-group-btn btn-group"> <button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown">Select one <span class="caret"></span></button> <ul class="dropdown-menu"> <li ><a href="#">Entrepreneurship</a></li> <li ><a href="#">Investments</a></li> <li ><a href="#">Management</a></li> <li class="divider"></li> <li ><a href="#">All of the Above</a></li> </ul> </div><!-- /btn-group --> <%= f.text_field :years_business , :class => "form-control", :placeholder => "Years of experience" %> </div> Now there are 2 more of these, and basically what happens is that if I select an item for the first time from the dropdown list, everything works great. But the moment I select the same button/or new button, the page immediately kind of refreshes, they selected value will not show up after the list drops down and user selects a value. I viewed the page source and added additional javascript src and types, but still doesnt work. the jquery code: jQuery(function ($) { $('.input-group-btn .dropdown-menu > li:not(.divider)').click(function(){ $(this).closest('ul').prev().text($(this).text()) }) }); Any suggestions what is causing the problem?? The jsfiddle link is here: http://jsfiddle.net/w4s8u/7/

  • Bulk update & occasional insert (coredata) - Too slow

    - by Andrew
    Update: Currently looking into NSSet's minusSet. Links: http://stackoverflow.com/questions/1475636/comparing-two-arrays Hi guys, I could benefit from your wisdom here. I'm using Core Data in my app; on first launch I download a data file and insert over 500 objects (each with 60 attributes) - fast, no problem. On each subsequent launch I download an updated version of the file, from which I need to update all existing objects' attributes (except maybe 5 attributes) and create new objects for items which have been added to the downloaded file. So, on first launch I get 500 objects... say a week later my file now contains 507 items. I create two arrays, one for existing and one for downloaded. NSArray *peopleArrayDownloaded = [CoreDataHelper getObjectsFromContext:@"person" :@"person_id" :YES :managedObjectContextPeopleTemp]; NSArray *peopleArrayExisting = [CoreDataHelper getObjectsFromContext:@"person" :@"person_id" :YES :managedObjectContextPeople]; If the count of each array is equal then I just do this: NSUInteger index = 0; if ([peopleArrayExisting count] == [peopleArrayDownloaded count]) { NSLog(@"Number of people downloaded is same as the number of people existing"); for (person *existingPerson in peopleArrayExisting) { person *tempPerson = [peopleArrayDownloaded objectAtIndex:index]; // NSLog(@"Updating id: %@ with id: %@",existingPerson.person_id,tempPerson.person_id); // I have 60 attributes which I need to update on each object; is there a quicker way other than overwriting the existing values? index++; } } else { NSLog(@"Number of people downloaded is different to number of players existing"); So now comes the slow part. I end up using this (which is tooooo slow): NSLog(@"Need people added to the league"); for (person *tempPerson in peopleArrayDownloaded) { NSPredicate *predicate = [NSPredicate predicateWithFormat:@"person_id = %@",tempPerson.person_id]; // NSLog(@"Searching for existing person, person_id: %@",existingPerson.person_id); NSArray *filteredArray = [peopleArrayExisting filteredArrayUsingPredicate:predicate]; if ([filteredArray count] == 0) { NSLog(@"Couldn't find an existing person in the downloaded file. Adding.."); person *newPerson = [NSEntityDescription insertNewObjectForEntityForName:@"person" inManagedObjectContext:managedObjectContextPeople]; Is there a way to generate a new array of index items referring to the additional items in my downloaded file? Incidentally, in my tableViews I'm using NSFetchedResultsController, so updating attributes will call [cell setNeedsDisplay]; about 60 times per cell — not a good thing, and it can crash the app. Thanks for reading :)
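
    The O(n*m) cost comes from filtering the full existing array once per downloaded person. Apple's documented find-or-create pattern fits here: fetch only the existing people whose ids appear in the download, sorted by id, then walk the two sorted arrays in step. A hedged sketch (non-ARC era, variable names mine):

        NSArray *ids = [peopleArrayDownloaded valueForKey:@"person_id"];
        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        [request setEntity:[NSEntityDescription entityForName:@"person"
                                       inManagedObjectContext:managedObjectContextPeople]];
        [request setPredicate:[NSPredicate predicateWithFormat:@"person_id IN %@", ids]];
        [request setSortDescriptors:[NSArray arrayWithObject:
            [NSSortDescriptor sortDescriptorWithKey:@"person_id" ascending:YES]]];
        NSError *error = nil;
        NSArray *existing = [managedObjectContextPeople executeFetchRequest:request
                                                                      error:&error];
        NSArray *sortedDownloaded = [peopleArrayDownloaded
            sortedArrayUsingDescriptors:[request sortDescriptors]];

        // One linear pass over both sorted arrays: O(n) instead of O(n * m).
        NSUInteger i = 0;
        for (person *incoming in sortedDownloaded) {
            person *match = (i < [existing count]) ? [existing objectAtIndex:i] : nil;
            if (match && [match.person_id isEqual:incoming.person_id]) {
                // update match's attributes from incoming
                i++;
            } else {
                // no counterpart: insert a new person into managedObjectContextPeople
            }
        }

    Wrapping the whole pass in a single save (and, for the table, letting NSFetchedResultsController batch its updates via that one save) also avoids the per-attribute redraw storm mentioned above.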
