Search Results

Search found 22041 results on 882 pages for 'kill process'.

Page 760/882 | < Previous Page | 756 757 758 759 760 761 762 763 764 765 766 767  | Next Page >

  • Problems with ObjectDataSource, giving attributes like delete, insert and update

    - by kamal
    After going through the process of adding the various attributes (insert, delete and update), when I run it through the browser, editing works but updating and deleting don't. (It fails for the update and shows the same thing for delete.) My friends think I need to write code to repair the problem; can you help me please? It shows this:

        Server Error in '/WebSite3' Application.
        ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.InvalidOperationException: ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [InvalidOperationException: ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.]
        System.Web.UI.WebControls.ObjectDataSourceView.GetResolvedMethodData(Type type, String methodName, IDictionary allParameters, DataSourceOperation operation) +1119426
        System.Web.UI.WebControls.ObjectDataSourceView.ExecuteUpdate(IDictionary keys, IDictionary values, IDictionary oldValues) +1008
        System.Web.UI.DataSourceView.Update(IDictionary keys, IDictionary values, IDictionary oldValues, DataSourceViewOperationCallback callback) +92
        System.Web.UI.WebControls.GridView.HandleUpdate(GridViewRow row, Int32 rowIndex, Boolean causesValidation) +907
        System.Web.UI.WebControls.GridView.HandleEvent(EventArgs e, Boolean causesValidation, String validationGroup) +704
        System.Web.UI.WebControls.GridView.OnBubbleEvent(Object source, EventArgs e) +95
        System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37
        System.Web.UI.WebControls.GridViewRow.OnBubbleEvent(Object source, EventArgs e) +123
        System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37
        System.Web.UI.WebControls.LinkButton.OnCommand(CommandEventArgs e) +118
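
    For context, the usual shape of the fix, offered as a sketch (assuming ObjectDataSource1's UpdateMethod points at a business object you control): the Update method's parameter names must match the bound field names exactly, and the error above lists both 'First_name' and 'First name', which suggests a duplicated or misnamed bound field in the markup. Something like:

        // Hypothetical business object wired to ObjectDataSource1 via
        // TypeName/UpdateMethod; the parameter names must match the grid's
        // bound DataField names (First_name, Surname, Original_author_id).
        public class AuthorData
        {
            public void Update(string First_name, string Surname, int Original_author_id)
            {
                // persist the changes here
            }
        }

    Removing or renaming the stray 'First name' / 'original_author id' fields so that only the underscored names are bound is typically the other half of the fix.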

    Read the article

  • What makes deployment successful for some users and unsuccessful for others?

    - by Julien
    I am trying to deploy a Visual C++ application (developed with Microsoft Visual Studio 2008) using a Setup and Deployment Project. After installation, users on some target computers get the following error message after launching the application executable: “This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem.” Another user could run the application properly after installation. I cannot find the root cause of this problem, despite spending several hours on the Visual Studio help files and online forums (most postings date back to 2006). Does anyone at Stack Overflow have a suggestion? Additional details appear below. The application uses FLTK 1.1.9 for a GUI library, as well as some Boost 1.39 libraries (regex, lexical_cast, date_time, math). I made sure I am trying to deploy the release version (not the debug version) of the application. The Runtime Library in the Code Generation settings is Multi-threaded DLL (/MD). Dependency Walker lists the following DLLs for myapp.exe: wsock32.dll, comctl32.dll, kernel32.dll, user32.dll, gdi32.dll, shell32.dll, ole32.dll, mvcp90.dll, msvcr90.dll. In the Setup and Deployment Project, I add the following DLLs to the File System on Target Machine: fltkdlld.dll, and a folder named Microsoft.VC90.CRT with msvcm90.dll, msvcp90.dll, mcvcr90.dll and Microsoft.VC90.CRT.manifest. The installation process on the target computers getting the error message requires having the .NET Framework 3.5 installed first. Any suggestions? Thanks in advance!
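
    One thing worth checking (a sketch of a common cause, not a confirmed diagnosis): that side-by-side error usually means the embedded manifest requests a CRT version the target machine doesn't have. The application's manifest should carry the VC90 CRT dependency below, and the target needs either the matching vcredist_x86.exe for Visual Studio 2008 (same service pack you built with) or a correct private Microsoft.VC90.CRT folder. The version shown (9.0.21022.8, the RTM CRT) is an assumption and must match your actual build:

        <dependency>
          <dependentAssembly>
            <assemblyIdentity type="win32"
                              name="Microsoft.VC90.CRT"
                              version="9.0.21022.8"
                              processorArchitecture="x86"
                              publicKeyToken="1fc8b3b9a1e18e3b" />
          </dependentAssembly>
        </dependency>

    Note also that fltkdlld.dll is conventionally the debug build of the FLTK DLL, which would pull in the debug CRT that the redistributable does not cover.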

    Read the article

  • Stop an executing recursive javascript function

    - by DA
    Using jQuery, I've built an image/slide rotator. The basic setup is (in pseudocode):

        function setupUpSlide(SlideToStartWith){
            var thisSlide = SlideToStartWith;
            ...set things up...
            fadeInSlide(thisSlide)
        }
        function fadeInSlide(thisSlide){
            ...fade in this slide...
            fadeOutSlide(thisSlide)
        }
        function fadeOutSlide(thisSlide){
            ...fade out this slide...
            setupUpSlide(nextSlide)
        }

    I call the first function and pass in a particular slide index, and it does its thing, calling the chain of functions which then, in turn, calls the first function again, passing in the next index. This repeats infinitely (resetting the index when it gets to the last item). This works just fine. What I want to do now is allow someone to override the slide show by clicking on a particular slide number. If slide #8 is showing and I click #3, I want the recursion to stop and then to call the initial function passing in slide #3, which, in turn, will start the process again. But I'm not sure how to go about that. How does one properly 'break' a recursive script? Should I create some sort of global 'watch' variable that, if at any time it is true, will return false and allow the new function to execute?
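
    A minimal sketch of one common approach, assuming the fades are driven by jQuery animation callbacks: instead of a single boolean flag, keep a generation token. Every manual restart bumps the token, and any callback belonging to a stale generation simply stops. The element IDs and the nextIndex() helper here are hypothetical:

        var generation = 0;   // bumped on every manual restart

        function setUpSlide(index, gen) {
            if (gen !== generation) return;   // a newer cycle took over; die quietly
            fadeInSlide(index, gen);
        }

        function fadeInSlide(index, gen) {
            $('#slide-' + index).fadeIn(500, function () {
                if (gen !== generation) return;
                fadeOutSlide(index, gen);
            });
        }

        function fadeOutSlide(index, gen) {
            $('#slide-' + index).fadeOut(500, function () {
                if (gen !== generation) return;
                setUpSlide(nextIndex(index), gen);
            });
        }

        // user clicks slide #3: invalidate the old chain and start a new one
        $('.slide-picker').click(function () {
            generation += 1;
            $('.slide').stop(true, true);     // cut any in-flight animation
            setUpSlide($(this).data('slide'), generation);
        });

    The advantage over a global boolean is that every dangling callback is self-terminating, so two overlapping chains can never fight each other.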

    Read the article

  • Multi-threaded library calls in ASP.NET page request.

    - by ProfK
    I have an ASP.NET app; very basic, but right now too much code to post if we're lucky and I don't have to. We have a class called ReportGenerator. On a button click, the method GenerateReports is called. It makes an async call to InternalGenerateReports using ThreadPool.QueueUserWorkItem and returns, ending the ASP.NET response. It doesn't provide any completion callback or anything. InternalGenerateReports creates and maintains five threads in the thread pool, one report per thread, also using QueueUserWorkItem, and waits in a loop until the calls on all of them complete. Each thread uses an ASP.NET ReportViewer control to render a report to HTML. That is, for 200 reports, InternalGenerateReports should create 5 threads 40 times. As threads complete, report data is queued, and when all five have completed, report data is flushed to disk. My biggest problems are that after running just one report the aspnet process is 'hung', and also that at around 200 reports the app just hangs. I just simplified this code to run in a single thread, and that works fine. Before we get into details like my code, is there anything obvious in the above scenario that might be wrong?
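
    For reference, a minimal sketch of the batch-of-five pattern described above, with hypothetical ReportJob/RenderReport stand-ins for the real report code. One thing the sketch makes visible: if the real code blocks a ThreadPool thread while waiting on other ThreadPool work (or renders ReportViewer, which expects page infrastructure, off the request thread), that alone can starve the pool or deadlock, which fits the hang described.

        using System.Collections.Generic;
        using System.Threading;

        class ReportJob { public int Id; }   // hypothetical stand-in

        static class BatchRunner
        {
            // Hypothetical renderer standing in for the ReportViewer call.
            static void RenderReport(ReportJob job) { /* render to HTML */ }

            public static void GenerateBatch(IList<ReportJob> jobs)
            {
                using (var done = new CountdownEvent(jobs.Count))
                {
                    foreach (var job in jobs)
                    {
                        ThreadPool.QueueUserWorkItem(state =>
                        {
                            try { RenderReport((ReportJob)state); }
                            finally { done.Signal(); }    // count down even on failure
                        }, job);
                    }
                    done.Wait();   // parks a thread until the batch finishes --
                                   // doing this on a ThreadPool thread invites starvation
                }
            }
        }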

    Read the article

  • SQL Server compare table entries for update

    - by Dave
    I have a trade table with several million rows. Each row represents a version of a trade. If I'm given a possibly new trade, I compare it to the latest version in the trade table. If it has changed, I add a new version; otherwise I do nothing. In order to compare the two trades, I read the version from the trade table into my application. This doesn't work well when I'm given tens of thousands of possibly new trades: even batching the reads to pull in 1,000 trades at a time and compare them, the whole process can take several minutes, and all the time is spent in the DB. I'm trying to find a way to compare the possibly new trades to the ones in the trade table without so much I/O. What I've come up with so far is adding a hash column to each row in the trade table, where the hash covers all the trade fields. When I'm given possibly new trades I compute their hashes, put the values into a temporary table, then find the ones that are different. This feels very hacky. Is there a better way of doing it? Thanks
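
    For what it's worth, the hash column is a recognized change-detection pattern rather than a hack; keeping it as a persisted computed column means it stays in step with the fields automatically. A hedged T-SQL sketch with invented table and column names (note that HASHBYTES input is capped at 8000 bytes on older SQL Server versions):

        -- Assumed: Trade(TradeId, FieldA, FieldB, FieldC, ..., RowHash),
        -- with #Staged holding the possibly-new trades in the same shape.
        -- RowHash could be: ALTER TABLE Trade ADD RowHash AS
        --     HASHBYTES('SHA1', FieldA + '|' + FieldB + '|' + FieldC) PERSISTED;

        SELECT s.TradeId            -- trades that actually changed
        FROM   #Staged s
        JOIN   Trade t
           ON  t.TradeId = s.TradeId
        WHERE  t.RowHash <> HASHBYTES('SHA1',
                   s.FieldA + '|' + s.FieldB + '|' + s.FieldC);

    One set-based query replaces the read-compare round trips, which is where the minutes were going.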

    Read the article

  • Big problems with an iPhone ad hoc build

    - by phil swenson
    No matter what I do, I can't get my ad hoc provisioning profile to work. In Organizer, I always get "A valid signing identity matching this profile cannot be found in your keychain" for my ad hoc profile. I have my distribution cert installed in my login keychain. I dragged the ad hoc mobileprovision file to Xcode... that's pretty much all there is to it, right? I searched around and found suggestions like re-creating the cert/profile. Did that; same thing. Also, make sure your login keychain is the default. It is. I even tried a different computer. Same result. In the Xcode AdHoc target I don't have a distribution identity to pick. This all used to work, but obviously I messed something up... Perhaps my process is just wrong (it's been months since I did this). Does someone have a step-by-step list of how to set up for ad hoc distribution?

    Read the article

  • Xcode 4.4 bundle version updates not picked up until subsequent build

    - by Mark Struzinski
    I'm probably missing something simple here. I am trying to auto-increment my build number in Xcode 4.4, but only when archiving my application (in preparation for a TestFlight deployment). I have a working shell script that runs on the target and successfully updates the Info.plist file for each build. My build configuration for archiving is named 'Ad-Hoc'. Here is the script:

        if [ $CONFIGURATION == Ad-Hoc ]; then
            echo "Ad-Hoc build. Bumping build#..."
            plist=${PROJECT_DIR}/${INFOPLIST_FILE}
            buildnum=$(/usr/libexec/PlistBuddy -c "Print CFBundleVersion" "${plist}")
            if [[ "${buildnum}" == "" ]]; then
                echo "No build number in $plist"
                exit 2
            fi
            buildnum=$(expr $buildnum + 1)
            /usr/libexec/Plistbuddy -c "Set CFBundleVersion $buildnum" "${plist}"
            echo "Bumped build number to $buildnum"
        else
            echo $CONFIGURATION " build - Not bumping build number."
        fi

    This script updates the plist file appropriately, and the change is reflected in Xcode each time I archive. The problem is that the .ipa file that comes out of the archive process still shows the previous build number. I have tried the following solutions with no success:

        - Clean before build
        - Clean build folder before build
        - Move the Run Script phase to directly after the Target Dependencies step in Build Phases
        - Add the script as a Run Script pre-action in my scheme

    No matter what I do, when I look at the build log I see that the Info.plist file is processed as one of the very first steps. It always runs prior to my script updating the build number, which is, I assume, why the build number is never current in the .ipa file. Is there a way to force the Run Script phase to run before the Info.plist file is processed?
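
    One workaround worth trying (a sketch, not guaranteed against every Xcode 4.4 pipeline): since the source Info.plist is processed before any Run Script phase can touch it, also patch the already-processed copy inside the build products, so the same archive picks up the new number. TARGET_BUILD_DIR and INFOPLIST_PATH are standard Xcode build settings:

        # After bumping the source plist, also stamp the processed Info.plist
        # in the build output so THIS archive carries the new number.
        /usr/libexec/PlistBuddy -c "Set :CFBundleVersion $buildnum" \
            "${TARGET_BUILD_DIR}/${INFOPLIST_PATH}"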

    Read the article

  • strange memory error when deleting object from Core Data

    - by llloydxmas
    I have an application that downloads an XML file, parses the file, and creates Core Data objects while doing so. In the parse code I have a method called 'emptyDataContext' that removes all items from Core Data before creating replacements from the XML file. This method looks like this:

        -(void) emptyDataContext
        {
            NSMutableArray* mutableFetchResults = [CoreDataHelper getObjectsFromContext:@"Condition" :@"road" :NO :managedObjectContext];
            NSFetchRequest * allCon = [[NSFetchRequest alloc] init];
            [allCon setEntity:[NSEntityDescription entityForName:@"Condition" inManagedObjectContext:managedObjectContext]];
            NSError * error = nil;
            NSArray * conditions = [managedObjectContext executeFetchRequest:allCon error:&error];
            [allCon release];
            for (NSManagedObject * condition in conditions) {
                [managedObjectContext deleteObject:condition];
            }
        }

    The first time this runs it deletes all objects and functions as it should, creating new objects from the XML file. I created an 'update' button that starts the exact same process of retrieving and parsing the file as it did the first time. All is well until it's time to delete the Core Data objects again: the 'deleteObject' call raises an "EXC_BAD_ACCESS" error each time. This only happens on the second time through. In the debugger window, when walking through the deletion FOR loop on the second iteration, 'conditions' is the fetched array of 7 objects and 'condition' should be an individual condition, yet 'condition' does not match any of the objects in the 'conditions' array. I'm sure this is why I'm getting the memory access errors; I'm just not sure why this fetch (or the FOR loop) is returning a wrong reference. All the code that successfully performed this function on the first iteration is used in the second, but with very different results. Thanks in advance for the help!
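
    A couple of things worth ruling out, sketched under the assumption that a single managed object context is reused between passes: the first fetch (mutableFetchResults) is assigned but never used or released, and the deletions are never saved, so on the next pass the context can hand back stale, already-deleted objects. Saving after the purge is the usual hygiene:

        // Minimal sketch: delete everything, then commit so the context
        // doesn't return stale (deleted-but-unsaved) objects on the next pass.
        for (NSManagedObject *condition in conditions) {
            [managedObjectContext deleteObject:condition];
        }
        NSError *saveError = nil;
        if (![managedObjectContext save:&saveError]) {
            NSLog(@"Save after purge failed: %@", saveError);
        }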

    Read the article

  • Delaying LINQ to SQL Select Query Execution

    - by Maxim Z.
    I'm building an ASP.NET MVC site that uses LINQ to SQL. In my search method, which has some required and some optional parameters, I want to build a LINQ query while testing for the existence of those optional parameters. Here's what I'm currently thinking:

        using(var db = new DBDataContext())
        {
            IQueryable<Listing> query = null;

            //Handle required parameter
            query = db.Listings.Where(l => l.Lat >= form.bounds.extent1.latitude
                                        && l.Lat <= form.bounds.extent2.latitude);

            //Handle optional parameter
            if (numStars != null)
                query = query.Where(l => l.Stars == (int)numStars);

            //Other parameters...

            //Execute query (does this happen here?)
            var result = query.ToList();

            //Process query...
        }

    Will this implementation "bundle" the where clauses and then execute the bundled query? If not, how should I implement this feature? Also, is there anything else that I can improve? Thanks in advance.
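
    For what it's worth, this pattern does work as hoped: Where over an IQueryable only composes an expression tree, and nothing hits the database until ToList() (or another enumeration) runs, at which point LINQ to SQL emits a single SQL statement containing all accumulated predicates. A compact illustration of the same conditional composition, with hypothetical parameters:

        IQueryable<Listing> query =
            db.Listings.Where(l => l.Lat >= minLat && l.Lat <= maxLat);

        if (numStars != null) query = query.Where(l => l.Stars == (int)numStars);
        if (maxPrice != null) query = query.Where(l => l.Price <= maxPrice);

        // Still no SQL sent. One round trip happens here, with every
        // predicate folded into a single WHERE clause:
        var result = query.ToList();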

    Read the article

  • What is a file verification system for a PHP project, or license checking via configuration files?

    - by Jayapal Chandran
    My colleague asked me a question like "license check to config file". When I searched I found this: http://www.google.com/search?q=file+verification+system&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a and in the results I got this: http://integrit.sourceforge.net/texinfo/integrit.html but could not grasp much of its idea. Here are my thoughts. Our project is written in CodeIgniter. The project owner is providing it to their customer; the owner is a business partner under that arrangement. The owner needs control over the project code so that the customer will not break the rules agreed with him, like changing the code, moving it to another server, or using it past its validity. So the owner needs a system to enable or disable the site. Let me give an example: owner.com will have an admin panel where he can either disable or enable client.com. When he disables it, client.com should display a custom message instead of loading the files. client.com is written in a way that it will process requests from owner.com, and also the other way round. So, here I want a list of the concepts with which we can implement this ownership and control over client.com. Any suggestions, links, references, or answers will be helpful. If I am missing something in my question I will update it according to your comments, so that users can give their ideas without being confused about what I asked. Thanks.
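
    A minimal sketch of the kill-switch concept (every URL, key and path below is hypothetical): client.com phones home with a site key, caches the verdict briefly so a temporary owner.com outage doesn't take the site down, and refuses to serve pages when disabled. In CodeIgniter this could run as a pre_controller hook.

        <?php
        // Hypothetical license gate for client.com.
        function license_is_active()
        {
            $cache = sys_get_temp_dir() . '/license_status';
            // Trust a cached verdict for up to an hour.
            if (file_exists($cache) && time() - filemtime($cache) < 3600) {
                return trim(file_get_contents($cache)) === 'enabled';
            }
            // owner.com endpoint and site key are illustrative assumptions.
            $status = @file_get_contents(
                'https://owner.com/api/license?site_key=' . urlencode('CLIENT-123')
            );
            if ($status === false) {
                return true; // fail open if owner.com is unreachable (a policy choice)
            }
            file_put_contents($cache, $status);
            return trim($status) === 'enabled';
        }

        if (!license_is_active()) {
            exit('This site is temporarily unavailable. Please contact the owner.');
        }

    In a real deployment the response would need to be signed (e.g. an HMAC over the status plus a timestamp) so the customer can't fake or cache an 'enabled' answer, and the check itself would live in code protected by whatever obfuscation or licensing terms apply.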

    Read the article

  • PHP Transferring Photos From One Oracle Database Table to Another

    - by Jonathan Swift
    I am attempting to transfer a set of photos (BLOBs) from one table to another across databases. I'm nearly there, except for binding the photo parameter. I have the following code:

        $conn_db1 = oci_pconnect('username', 'password', 'db1');
        $conn_db2 = oci_pconnect('username', 'password', 'db2');

        $parse_db1_select = oci_parse($conn_db1, "SELECT REF PID, BINARY_OBJECT PHOTOGRAPH FROM BLOBS");
        $parse_db2_insert = oci_parse($conn_db2, "INSERT INTO PHOTOGRAPHS (PID, PHOTOGRAPH) VALUES (:pid, :photo)");

        oci_execute($parse_db1_select);
        while ($row = oci_fetch_assoc($parse_db1_select)) {
            $pid = $row['PID'];
            $photo = $row['PHOTOGRAPH'];
            oci_bind_by_name($parse_db2_insert, ':pid', $pid, -1, OCI_B_INT); // This line causes an error
            oci_bind_by_name($parse_db_insert, ':photo', $photo, -1, OCI_B_BLOB);
            oci_execute($parse_db2_insert);
        }

        oci_close($db1);
        oci_close($db2);

    But I get the following error, on the line commented above:

        Warning: oci_execute() [function.oci-execute]: ORA-03113: end-of-file on communication channel
        Process ID: 0 Session ID: 790 Serial number: 118

    Does anyone know the right way to do this?
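
    A sketch of the usual OCI8 approach, assuming PHOTOGRAPH really is a BLOB column: an OCI_B_BLOB binding expects a LOB descriptor rather than a raw string, so each insert binds an OCI-Lob object and stages the bytes in a temporary LOB. (Note also that the second bind in the posted code targets $parse_db_insert, which looks like a typo for $parse_db2_insert.)

        // Hypothetical rework of the insert side; $pid, $photo and the
        // statement handles are as in the question.
        $lob = oci_new_descriptor($conn_db2, OCI_D_LOB);
        oci_bind_by_name($parse_db2_insert, ':pid',   $pid, -1, SQLT_INT);
        oci_bind_by_name($parse_db2_insert, ':photo', $lob, -1, OCI_B_BLOB);
        $lob->writeTemporary($photo, OCI_TEMP_BLOB);   // stage the photo bytes
        oci_execute($parse_db2_insert, OCI_DEFAULT);   // keep the txn open for the LOB
        oci_commit($conn_db2);
        $lob->free();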

    Read the article

  • How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

    - by Henry
    About 6 months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript. With TextMate, whenever I needed to run a spec or a test, I could just hit command+R on the test or spec file; another window would open and the results would be displayed with the 'pretty' format applied. If the spec or test was a lengthy one, I could just continue working with the codebase, since the test/spec was running in a separate process/window. After the test ran, I could click through the results directly to the corresponding line in the spec file. Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment. Running :Rake when the current buffer is a test or spec runs the file, then splits the buffer to display the results. You can navigate through the results and key through to the corresponding spot in the file. The problem with the rails.vim approach is that it locks up the MacVim window while the test runs. This can be an issue with big apps that might have a lot of setup/teardown built into the tests. Also, the visual red/green HTML results that TextMate displays (via --format pretty, I'm assuming) are a bit easier to scan than the split window. This guy came close about 18 months ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/ The script he wrote works with a bit of hacking, but the tests still run within MacVim and lock up the current window. Any ideas on how to fully replicate the TextMate behavior described above in MacVim? Thanks!
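
    One small sketch of the fire-and-forget route, assuming an RSpec 1-era `spec` command and a browser on the PATH; the mapping and paths are illustrative only, not a polished plugin:

        " In ~/.vimrc: run the current spec file in a background shell with
        " HTML output, then pop the results into a browser. MacVim never blocks.
        command! RunSpec silent !(spec % --format html:/tmp/spec_results.html; open /tmp/spec_results.html) &
        nnoremap <leader>r :w<CR>:RunSpec<CR>

    This gives the separate-window, red/green HTML half of the TextMate experience; the click-through-to-line half is the part that still needs something like the blog post's hackery.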

    Read the article

  • gpg error - connection already closed?

    - by OopsForgotMyOtherUserName
    OMG... I hope someone can help me, because I am so lost as to what to try next. I don't know what is causing the error to happen, and I don't see how to figure it out. I keep going between the pgloader.conf examples and what I have, and I don't understand why I keep getting the 'connection already closed' error. The first few lines of my fr.conf are at the very end. I'd really appreciate some guidance here; I've been trying to get this thing going all morning and am getting stuck just on this part. Running this command at the command line:

        /usr/bin/pgloader -c /var/mybin/pgconfs/fr.conf

    yields this in pgloader.log (with the process just hanging):

        more /tmp/pgloader.log
        27-03-2010 12:22:53 pgloader INFO Logger initialized
        27-03-2010 12:22:53 pgloader INFO Reformat path is ['/usr/share/python-support/pgloader/reformat']
        27-03-2010 12:22:53 pgloader INFO Will consider following sections:
        27-03-2010 12:22:53 pgloader INFO fixed
        27-03-2010 12:22:54 fixed INFO fixed processing
        27-03-2010 12:22:54 pgloader INFO All threads are started, wait for them to terminate
        27-03-2010 12:22:57 fixed ERROR connection already closed
        27-03-2010 12:22:57 fixed INFO closing current database connection

    And here is fr.conf:

        [pgsql]
        host = localhost
        port = 5432
        base = frdb
        user = username
        pass = password

        [fixed]
        table = fr
        format = fixed
        filename = /var/www/fr.txt
        ...

    Read the article

  • jQuery removing elements from DOM but still reporting them as present

    - by RyanP13
    Hi, I have an address finder system whereby a user enters a postcode; if the postcode is validated then an address list is returned and displayed. They then select an address line, the list disappears, and the address line is split further into some form inputs. The issue I am facing: when they have been through the above process, then cleared the postcode form field and hit the find address button, the address list re-appears. Even though the list and its parent tr have been removed from the DOM, the list is still reported as present, with length 1. My code is as follows:

        // when postcode validated display box
        var $addressList = $("div#selectAddress > ul").length;

        // if address list present show the address list
        if ($addressList != 0) {
            $("div#selectAddress").closest("tr").removeClass("hide");
        }

        // address list hidden by default
        // if coming back to modify details then display address inputs
        var $customerAddress = $("form#detailsForm input[name*='customerAddress']");
        var $addressInputs = $.cookies.get('cpqbAddressInputs');
        if ($addressInputs) {
            if ($addressInputs == 'visible') {
                $($customerAddress).closest("tr").removeClass("hide");
            }
        } else {
            $($customerAddress).closest("tr").addClass("hide");
        }

        // Need to change form action URL to call post code web service
        $("input.findAddress").live('click', function(){
            var $postCode = encodeURI($("input#customerPostcode").val());
            if ($postCode != "") {
                var $formAction = "customerAction.do?searchAddress=searchAddress&custpc=" + $postCode;
                $("form#detailsForm").attr("action", $formAction);
            } else { alert($addressList); }
        });

        // darker highlight when li is clicked
        // split address string into corresponding inputs
        $("div#selectAddress ul li").live('click', function(){
            $(this).removeClass("addressHover");
            //$("li.addressClick").removeClass("addressClick");
            $(this).addClass("addressClick");
            var $splitAddress = $(this).text().split(",");
            $($customerAddress).each(function(){
                var $inputCount = $(this).index("form#detailsForm input[name*='customerAddress']");
                $(this).val($splitAddress[$inputCount]);
            });
            $($customerAddress).closest("tr").removeClass("hide");
            $.cookies.set('cpqbAddressInputs', 'visible');
            $(this).closest("tr").fadeOut(250, function() {
                $(this).remove();
            });
        });
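
    One likely culprit, offered as an observation on the posted code rather than a confirmed diagnosis: $addressList captures a number once, at page load, so it can never reflect later DOM changes; the element may be gone, but the snapshot still says 1. Re-querying inside the handler keeps the check live. A sketch:

        $("input.findAddress").live('click', function () {
            // Re-evaluate at click time instead of trusting the page-load snapshot.
            var listCount = $("div#selectAddress > ul").length;
            if (listCount === 0) {
                // the list really is gone from the DOM; safe to fetch a fresh one
            }
        });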

    Read the article

  • What happens to existing workspaces after upgrading to TFS 2010

    - by e-mre
    Hi, I was looking for some insight into what happens to existing workspaces, and to files that are already checked out by people, after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.) I've checked the TFS Installation Guide and searched the web; all I could find are upgrade scenarios for the server side. Nobody even mentions what happens to source control clients. I created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is that the new TFS installation has a new instance ID. When I redirected the clients to the new server, they seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade. As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to some paper or blog post about it. Thanks in advance...

    Read the article

  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking, 'Oh, not that again!', but here we are, since Google have not yet provided a simpler method. I have been using a queue-based solution which worked fine:

        import datetime
        import logging

        from google.appengine.ext import db
        from google.appengine.ext import deferred

        from models import *

        DELETABLE_MODELS = [Alpha, Beta, AlphaBeta]

        def initiate_purge():
            for e in DELETABLE_MODELS:
                deferred.defer(delete_entities, e, 'purging', _queue = 'purging')

        class NotEmptyException(Exception):
            pass

        def delete_entities(e, queue):
            try:
                q = e.all(keys_only=True)
                db.delete(q.fetch(200))
                ct = q.count(1)
                if ct > 0:
                    raise NotEmptyException('there are still entities to be deleted')
                else:
                    logging.info('processing %s completed' % queue)
            except Exception, err:
                deferred.defer(delete_entities, e, queue, _queue = queue)
                logging.info('processing %s deferred: %s' % (queue, err))

    All this does is queue a request to delete some data (once for each class), and then, if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting refresh in a browser for 10 minutes. However, I'm having trouble deleting AlphaBeta entities; there are always a few left at the end. I think it is because the model contains reference properties:

        class AlphaBeta(db.Model):
            alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas')
            beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas')

    I have tried deleting the indexes relating to these entity types, but that did not make any difference. Any advice would be appreciated, please.

    Read the article

  • Coding Practices which enable the compiler/optimizer to make a faster program.

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register. They also made the ternary operator to help generate better code.

    As time passed, the compilers matured. They became very smart, in that their flow analysis allowed them to make better decisions about what values to hold in registers than you could possibly make yourself. The register keyword became unimportant.

    FORTRAN can be faster than C for some sorts of operations, due to alias issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code.

    What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question. [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What can you do to refactor that will remove these prohibitions, and allow the optimizer to generate even faster code? [Edit] Offset related link
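
    One concrete practice in the aliasing vein the question raises, sketched for GCC or Clang at -O2 (results vary by platform and compiler): C99's restrict promises the compiler that pointers don't overlap, recovering much of the FORTRAN advantage by letting it hoist loads and vectorize.

        /* Without restrict, the compiler must assume dst, src and scale may
         * alias, forcing it to reload *scale and re-read src on every
         * iteration. With restrict, the load can be hoisted and the loop
         * vectorized. */
        void scale_array(float *restrict dst,
                         const float *restrict src,
                         const float *restrict scale,
                         int n)
        {
            for (int i = 0; i < n; i++)
                dst[i] = src[i] * *scale;
        }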

    Read the article

  • Why is debugging better in an IDE?

    - by Bill Karwin
    I've been a software developer for over twenty years, programming in C, Perl, SQL, Java, PHP, JavaScript, and recently Python. I've never had a problem I could not debug using some careful thought, and well-placed debugging print statements. I respect that many people say that my techniques are primitive, and using a real debugger in an IDE is much better. Yet from my observation, IDE users don't appear to debug faster or more successfully than I can, using my stone knives and bear skins. I'm sincerely open to learning the right tools, I've just never been shown a compelling advantage to using visual debuggers. Moreover, I have never read a tutorial or book that showed how to debug effectively using an IDE, beyond the basics of how to set breakpoints and display the contents of variables. What am I missing? What makes IDE debugging tools so much more effective than thoughtful use of diagnostic print statements? Can you suggest resources (tutorials, books, screencasts) that show the finer techniques of IDE debugging? Sweet answers! Thanks much to everyone for taking the time. Very illuminating. I voted up many, and voted none down. Some notable points:

        - Debuggers can help me do ad hoc inspection or alteration of variables, code, or any other aspect of the runtime environment, whereas manual debugging requires me to stop, edit, and re-execute the application (possibly requiring recompilation).
        - Debuggers can attach to a running process or use a crash dump, whereas with manual debugging, "steps to reproduce" a defect are necessary.
        - Debuggers can display complex data structures, multi-threaded environments, or full runtime stacks easily and in a more readable manner.
        - Debuggers offer many ways to reduce the time and repetitive work to do almost any debugging tasks.
        - Visual debuggers and console debuggers are both useful, and have many features in common.
        - A visual debugger integrated into an IDE also gives you convenient access to smart editing and all the other features of the IDE, in a single integrated development environment (hence the name).

    Read the article

  • Remove this URL string when login fails and simply show div error

    - by Anagio
    My developer built our registration page to display a div when logins fail, based on a string in the URL. When logins fail, this is added to the URL: /login?msg=invalid. The PHP in my login.phtml which displays the error messages based on the msg= parameter is:

        <?php
        $msg = "";
        $msg = $_GET['msg'];
        if($msg==""){
            $showMsg = "";
        } elseif($msg=="invalid"){
            $showMsg = '
                <div class="alert alert-error">
                    <a class="close" data-dismiss="alert">×</a>
                    <strong>Error!</strong> Login or password is incorrect!
                </div>';
        } elseif($msg=="disabled"){
            $showMsg = "Your account has been disabled.";
        } elseif($msg==2){
            $showMsg = "Your account is not activated. Please check your email.";
        }
        ?>

    In the controller, the redirect to that URL is:

        else //email id does not exist in our database
        {
            //redirecting back with invalid email(invalid) msg=invalid.
            $this->_redirect($url."?msg=invalid");
        }

    I know there are a few other validation types, for disabled accounts etc. I'm in the process of redesigning the entire interface and would like to get rid of this kind of validation, so that the divs display when logins fail but without the URL strings. If it matters, the new div I want to display is:

        <div class="alert alert-error alert-login">
            Email or password incorrect
        </div>

    I'd like to replace the PHP myself in my login.phtml and controller, but I'm not a good programmer. What can I replace $this->_redirect($url."?msg=invalid"); with so that no strings are added to the URL, while still displaying the appropriate div? Thanks
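
    The _redirect() helper and .phtml views suggest Zend Framework 1, though that's an inference. If so, the usual substitute for the query string is a session-backed flash message; a sketch (the view-variable name is invented for illustration, and plain $_SESSION works the same way outside ZF):

        // In the controller action, instead of appending ?msg=invalid:
        $this->_helper->flashMessenger->addMessage('invalid');
        $this->_redirect($url);   // clean URL, no ?msg=...

        // In the action that renders the login page, hand messages to the view:
        $this->view->msgs = $this->_helper->flashMessenger->getMessages();

        // In login.phtml:
        <?php if (in_array('invalid', (array)$this->msgs)): ?>
            <div class="alert alert-error alert-login">
                Email or password incorrect
            </div>
        <?php endif; ?>

    The flash messenger stores the flag in the session for exactly one subsequent request, which is what the URL parameter was emulating.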

    Read the article

  • Using pointers to adjust global objects in objective-c

    - by Rob
    OK, so I am working with two sets of data that are extremely similar, and at the same time, these data sets are both global NSMutableArrays within the object.

        data_set_one = [[NSMutableArray alloc] init];
        data_set_two = [[NSMutableArray alloc] init];

    Two new NSMutableArrays are loaded, which need to be added to the old, existing data. These arrays are also global.

        xml_dataset_one = [[NSMutableArray alloc] init];
        xml_dataset_two = [[NSMutableArray alloc] init];

    To reduce code duplication (and because these data sets are so similar) I wrote a void method within the class to handle the data combination process for both arrays:

        -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str

    Now, I have a decent understanding of object oriented programming, so I was thinking that if I were to invoke the method with the global arrays like so...

        [self constructData:data_set_one fromDownloadArray:xml_dataset_one withMatchSelector:@"id"];

    ...then the global NSMutableArray (data_set_one) would reflect the changes that happen to the array within the method. Sadly, this is not the case: data_set_one doesn't reflect the changes (e.g. new objects within the array) outside of the method. Here is a code snippet of the problem:

        // data_set_one is empty
        // xml_dataset_one has a few objects
        [self constructData:data_set_one fromDownloadArray:xml_dataset_one withMatchSelector:@"id"];
        // data_set_one should now be xml_dataset_one, but when echoed to screen, it appears to remain empty

    And here is the gist of the code for the method; any help is appreciated.

        -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str
        {
            if ([data count] == 0) {
                data = down; // set data equal to downloaded data
            } else if ([down count] == 0) {
                // download yields no results, do nothing
            } else {
                // combine the two arrays here
            }
        }

    This project is not ARC enabled. Thanks for the help guys! Rob
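
    The symptom follows from the line `data = down;`: Objective-C passes the pointer by value, so reassigning the local parameter never touches the caller's global. A sketch of the usual fix, mutating the array the pointer refers to instead of reseating the pointer:

        // Mutate the passed-in array rather than replacing the local pointer.
        -(void)constructData:(NSMutableArray *)data
           fromDownloadArray:(NSMutableArray *)down
           withMatchSelector:(NSString *)sel_str
        {
            if ([data count] == 0) {
                [data addObjectsFromArray:down];  // caller's array now has the contents
            } else if ([down count] == 0) {
                // download yields no results; do nothing
            } else {
                // combine the two arrays here
            }
        }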

    Read the article

  • Problem building mplsh-run in lshkit

    - by Yijinsei
    Hi guys, I have been trying this for quite some time, but I'm still unable to build mplsh-run from lshkit. I'm not sure if this helps to explain my situation, but during the build process I get:

        /tmp/cc17kth4.o: In function `lshkit::MultiProbeLshRecallTable::reset(lshkit::MultiProbeLshModel, unsigned int, double, double)':
        mplsh-run.cpp:(.text._ZN6lshkit24MultiProbeLshRecallTable5resetENS_18MultiProbeLshModelEjdd[lshkit::MultiProbeLshRecallTable::reset(lshkit::MultiProbeLshModel, unsigned int, double, double)]+0x230): undefined reference to `lshkit::MultiProbeLshModel::recall(double) const'

        /tmp/cc17kth4.o: In function `void lshkit::MultiProbeLshIndex<unsigned int>::query_recall<lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> > >(float const*, float, lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> >&) const':
        mplsh-run.cpp:(.text._ZNK6lshkit18MultiProbeLshIndexIjE12query_recallINS_11TopkScannerINS_6MatrixIfE8AccessorENS_6metric5l2sqrIfEEEEEEvPKffRT_[void lshkit::MultiProbeLshIndex<unsigned int>::query_recall<lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> > >(float const*, float, lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> >&) const]+0x2c4): undefined reference to `lshkit::MultiProbeLsh::genProbeSequence(float const*, std::vector<unsigned int, std::allocator<unsigned int> >&, unsigned int) const'

        /tmp/cc17kth4.o: In function `void lshkit::MultiProbeLshIndex<unsigned int>::query<lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> > >(float const*, unsigned int, lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> >&)':
        mplsh-run.cpp:(.text._ZN6lshkit18MultiProbeLshIndexIjE5queryINS_11TopkScannerINS_6MatrixIfE8AccessorENS_6metric5l2sqrIfEEEEEEvPKfjRT_[void lshkit::MultiProbeLshIndex<unsigned int>::query<lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> > >(float const*, unsigned int, lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> >&)]+0x4a): undefined reference to `lshkit::MultiProbeLsh::genProbeSequence(float const*, std::vector<unsigned int, std::allocator<unsigned int> >&, unsigned int) const'

        collect2: ld returned 1 exit status

    The command I used to build mplsh-run is:

        g++ -I./lshkit/include -L/usr/lib -lm -lgsl -lgslcblas -lboost_program_options-mt mplsh-run.cpp

    Do you guys have any clue how I could solve this?
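
    Two things stand out, offered as a sketch rather than a verified fix: the undefined symbols are lshkit's own, so the lshkit library (or its .cpp sources) is missing from the link entirely; and with GNU ld, libraries listed before the file that uses them can be discarded, so they belong after the source. Assuming lshkit has been built into a static library under lshkit/lib (that path and the library name -llshkit are guesses based on the -I path), the command would look like:

        # Objects first, then the libraries they depend on.
        g++ -I./lshkit/include mplsh-run.cpp \
            -L./lshkit/lib -llshkit \
            -lgsl -lgslcblas -lboost_program_options-mt -lm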

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following:

        test eax,1000h
        jne 0064F5EC
        or edx,1000000h
        ... (repeated 13 times with minor differences in constants)

    I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all bits at once? Maybe I can use multiplication (multiply by 0x11111111 or some other magical constant)? Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me? Any other ideas on how to make it faster? Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
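
    For reference, the branch-free shift-and-mask ladder that solves exactly this problem (the classic interleave-with-zeros trick: each step doubles the gap between groups of bits, and the inverse runs the masks in the opposite order):

        #include <stdint.h>

        /* Spread bit i of a 13-bit input to bit 2*i of the result. */
        uint32_t spread13(uint32_t x)
        {
            x &= 0x1FFF;                       /* keep 13 bits            */
            x = (x | (x << 8)) & 0x00FF00FF;   /* split into two bytes    */
            x = (x | (x << 4)) & 0x0F0F0F0F;   /* ...then nibbles         */
            x = (x | (x << 2)) & 0x33333333;   /* ...then bit pairs       */
            x = (x | (x << 1)) & 0x55555555;   /* ...then single bits     */
            return x;
        }

        /* Inverse: gather the bits at even positions back together. */
        uint32_t gather13(uint32_t x)
        {
            x &= 0x55555555;
            x = (x | (x >> 1)) & 0x33333333;
            x = (x | (x >> 2)) & 0x0F0F0F0F;
            x = (x | (x >> 4)) & 0x00FF00FF;
            x = (x | (x >> 8)) & 0x0000FFFF;
            return x;
        }

    Four shift-or-mask steps replace the thirteen test-and-branch pairs, with no branches for the predictor to miss.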

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, though I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
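
    One detail that may dissolve the "I won't know the IDs" objection, worth verifying against your MySQL version and innodb_autoinc_lock_mode setting: for a single multi-row INSERT, InnoDB allocates the auto-increment values as one consecutive block, and LAST_INSERT_ID() returns the first of them, so the whole batch's keys are recoverable. A sketch with hypothetical column names:

        INSERT INTO trades (f1, f2, f3, f4)
        VALUES (...), (...), (...);      -- one statement, 3 rows

        SELECT LAST_INSERT_ID();         -- id of the FIRST inserted row;
                                         -- the batch holds id, id+1, id+2

    That would let you batch the writes (question 1) while keeping the sequential integer PK, avoiding the random-key index churn you're worried about with GUIDs.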

    Read the article

  • Most elegant way to break CSV columns into separate data structures using Python?

    - by Nick L
    I'm trying to pick up Python. As part of the learning process I'm porting a project I wrote in Java to Python. I'm at a section now where I have a list of CSV headers of the form:

        headers = [a, b, c, d, e, .....]

    and separate lists of groups that these headers should be broken up into, e.g.:

        headers_for_list_a = [b, c, e, ...]
        headers_for_list_b = [a, d, k, ...]
        .
        .
        .

    I want to take the CSV data and turn it into dicts based on these groups, e.g.:

        list_a = [
            {b:val_1b, c:val_1c, e:val_1e, ... },
            {b:val_2b, c:val_2c, e:val_2e, ... },
            {b:val_3b, c:val_3c, e:val_3e, ... },
            .
            .
            .
        ]

    where, for example, val_1b is the first row of the 'b' column, val_3c is the third row of the 'c' column, etc. My first "Java instinct" is to do something like:

        for row in data:
            for col_num, val in enumerate(row):
                col_name = headers[col_num]
                if col_name in group_a:
                    dict_a[col_name] = val
                elif headers[col_num] in group_b:
                    dict_b[col_name] = val
                ...
            list_a.append(dict_a)
            list_b.append(dict_b)
            ...

    However, this method seems inefficient/unwieldy and doesn't possess the elegance that Python programmers are constantly talking about. Is there a more "Zen-like" way I should try, keeping with the philosophy of Python?
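
    A sketch of the idiomatic route, assuming each row lines up with headers: zip the row into one lookup dict, then project it through each group with a dict comprehension.

        groups = {
            'list_a': headers_for_list_a,
            'list_b': headers_for_list_b,
            # ...more groups here
        }
        results = {name: [] for name in groups}

        for row in data:
            record = dict(zip(headers, row))   # header -> value for this row
            for name, cols in groups.items():
                results[name].append({col: record[col] for col in cols})

        list_a = results['list_a']

    Adding a new group then means adding one entry to groups rather than another elif branch.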

    Read the article

  • How do I tweak columns in a Flat File Destination in SSIS?

    - by theog
    I have an OLE DB Source and a Flat File Destination in the Data Flow of my SSIS project. The goal is simply to pump data into a text file, and it does that. Where I'm having problems is with the formatting. I need to be able to rtrim() a couple of columns to remove trailing spaces, and I have a couple more that need their leading zeros preserved; the current process is losing all the leading zeros. The rtrim() can be done by simple truncation and ignoring the truncation errors, but that's very inelegant and error prone. I'd like to find a better way, like actually doing the rtrim() function where needed. Exploring similar SSIS questions and answers on SO, the thing to do seems to be "use a Script Task", but that's usually just thrown out there with no details, and it's not at all an intuitive thing to set up. I don't see how to use scripting to do what I need. Do I use a Script Task on the Control Flow, or a Script Component in the Data Flow? Can I do rtrim() and pad strings where needed in a script? Anybody got an example of doing this or similar things? Many thanks in advance.
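
    For the trimming half, a sketch of the Data Flow route: it's a Script Component used as a transformation (not a Control Flow Script Task), and the column names below are hypothetical stand-ins for your own. For the leading zeros, the usual culprit is the columns arriving as numerics; casting them to a string type, e.g. (DT_WSTR, 10) in the source query or a Derived Column, is what preserves the zeros, and Derived Column's RTRIM() expression can also handle simple trims without any script at all.

        // Script Component (transformation), C#: fires once per data row.
        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Hypothetical columns; trim trailing spaces, guarding NULLs.
            if (!Row.AccountName_IsNull)
                Row.AccountName = Row.AccountName.TrimEnd();

            if (!Row.BranchCode_IsNull)
                Row.BranchCode = Row.BranchCode.TrimEnd();
        }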

    Read the article
