Search Results

Search found 22569 results on 903 pages for 'win32 process'.

  • What is a file verification system for a PHP project, or license-checking of configuration files?

    - by Jayapal Chandran
    Hi, my colleague asked me about adding a license check to a config file. When I searched I got this: http://www.google.com/search?q=file+verification+system&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a and in the results I found this: http://integrit.sourceforge.net/texinfo/integrit.html but could not grasp much of its idea. Here are my thoughts. Our project is written in CodeIgniter. The project owner provides it to their customers; the owner is a business partner in that arrangement. The owner needs control over the project code so that a customer cannot break the rules of the agreement, for example by changing the code, moving it to another server, or exceeding its validity. So the owner needs a system to enable and disable the site. Let me give an example: owner.com will have an admin panel where he can either disable or enable client.com. When he disables it, client.com should display a custom message instead of loading the files. client.com is written in a way that it will process requests from owner.com, and also the other way round. So, what I want here is a list of concepts with which we can implement this ownership and control over client.com. Any suggestions, links, references or answers will be helpful. If I am missing something in my question I will update it according to your comments, if any, so that users can give their ideas without being confused about what I asked. Thanks.

  • Remove this URL string when login fails and simply show the error div

    - by Anagio
    My developer built our registration page to display a div when a login fails, based on a string in the URL. When a login fails, /login?msg=invalid is appended to the URL. The PHP in my login.phtml which displays the error messages based on the msg= parameter is:

        <?php
        $msg = "";
        $msg = $_GET['msg'];
        if ($msg == "") {
            $showMsg = "";
        } elseif ($msg == "invalid") {
            $showMsg = '<div class="alert alert-error">
                <a class="close" data-dismiss="alert">×</a>
                <strong>Error!</strong> Login or password is incorrect!
                </div>';
        } elseif ($msg == "disabled") {
            $showMsg = "Your account has been disabled.";
        } elseif ($msg == 2) {
            $showMsg = "Your account is not activated. Please check your email.";
        }
        ?>

    In the controller, the redirect to that URL is:

        else // email id does not exist in our database
        {
            // redirecting back with invalid email (invalid) msg=invalid.
            $this->_redirect($url . "?msg=invalid");
        }

    I know there are a few other validation types, for disabled accounts etc. I'm in the process of redesigning the entire interface and would like to get rid of this kind of validation, so that the div tags are displayed when logins fail without showing the strings in the URL. If it matters, the new div I want to display is:

        <div class="alert alert-error alert-login">
            Email or password incorrect
        </div>

    I'd like to replace the PHP myself in my login.phtml and controller, but I am not a good programmer. What can I replace $this->_redirect($url."?msg=invalid"); with so that no strings are added to the URL while the appropriate div tags are still displayed? Thanks

  • How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

    - by Henry
    About 6 months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript. With TextMate, whenever I needed to run a spec or a test, I could just hit command+R on the test or spec file; another window would open and the results would be displayed with the 'pretty' format applied. If the spec or test was a lengthy one, I could continue working with the codebase, since the test/spec was running in a separate process/window. After the test ran, I could click through the results directly to the corresponding line in the spec file. Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment: running :Rake when the current buffer is a test or spec runs the file, then splits the buffer to display the results, and you can navigate through the results and key through to the corresponding spot in the file. The problem with the rails.vim approach is that it locks up the MacVim window while the test runs. This can be an issue with big apps that have a lot of setup/teardown built into their tests. Also, the visual red/green HTML results that TextMate displays (via --format pretty, I'm assuming) are a bit easier to scan than the split window. This guy came close about 18 months ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/. The script there worked with a bit of hacking, but the tests still ran within MacVim and locked up the current window. Any ideas on how to fully replicate the TextMate behavior described above in MacVim? Thanks!

  • Problem building mplsh-run in lshkit

    - by Yijinsei
    Hi guys, I have been trying this out for quite some time but I am still unable to build mplsh-run from lshkit. In case it helps to explain my situation, this is the linker output during the build process:

        /tmp/cc17kth4.o: In function `lshkit::MultiProbeLshRecallTable::reset(lshkit::MultiProbeLshModel, unsigned int, double, double)':
        mplsh-run.cpp:(.text._ZN6lshkit24MultiProbeLshRecallTable5resetENS_18MultiProbeLshModelEjdd[lshkit::MultiProbeLshRecallTable::reset(lshkit::MultiProbeLshModel, unsigned int, double, double)]+0x230): undefined reference to `lshkit::MultiProbeLshModel::recall(double) const'
        /tmp/cc17kth4.o: In function `void lshkit::MultiProbeLshIndex<unsigned int>::query_recall<lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> > >(float const*, float, lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> >&) const':
        mplsh-run.cpp:(.text._ZNK6lshkit18MultiProbeLshIndexIjE12query_recallINS_11TopkScannerINS_6MatrixIfE8AccessorENS_6metric5l2sqrIfEEEEEEvPKffRT_[void lshkit::MultiProbeLshIndex<unsigned int>::query_recall<lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> > >(float const*, float, lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> >&) const]+0x2c4): undefined reference to `lshkit::MultiProbeLsh::genProbeSequence(float const*, std::vector<unsigned int, std::allocator<unsigned int> >&, unsigned int) const'
        /tmp/cc17kth4.o: In function `void lshkit::MultiProbeLshIndex<unsigned int>::query<lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> > >(float const*, unsigned int, lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> >&)':
        mplsh-run.cpp:(.text._ZN6lshkit18MultiProbeLshIndexIjE5queryINS_11TopkScannerINS_6MatrixIfE8AccessorENS_6metric5l2sqrIfEEEEEEvPKfjRT_[void lshkit::MultiProbeLshIndex<unsigned int>::query<lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> > >(float const*, unsigned int, lshkit::TopkScanner<lshkit::Matrix<float>::Accessor, lshkit::metric::l2sqr<float> >&)]+0x4a): undefined reference to `lshkit::MultiProbeLsh::genProbeSequence(float const*, std::vector<unsigned int, std::allocator<unsigned int> >&, unsigned int) const'
        collect2: ld returned 1 exit status

    The command that I used to build mplsh-run is:

        g++ -I./lshkit/include -L/usr/lib -lm -lgsl -lgslcblas -lboost_program_options-mt mplsh-run.cpp

    Do you guys have any clue how I could solve this?

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following:

        test eax,1000h
        jne  0064F5EC
        or   edx,1000000h

    (repeated 13 times with minor differences in the constants). I wonder whether I can make it any faster. I would like to keep my code in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all the bits at once? Maybe I can use multiplication (multiply by 0x11111111 or some other magical constant)? Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me? Any other ideas on how to make it faster? And any idea how to do the inverse bitmap transformation? (I have to implement it too, but it's less critical.)
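
    For reference, the spacing described above can be done branch-free with the classic mask-and-shift "magic numbers" technique. This is a plain-C sketch (no MMX/SSE; the function names are illustrative), and the same masks applied in reverse give the inverse transformation:

        #include <stdint.h>
        #include <stdio.h>

        /* Move bit i of the low 16 bits to bit 2*i, so the 13 input bits
           end up at even positions within 26 bits, with no branches. */
        static uint32_t spread_bits(uint32_t x)
        {
            x &= 0x0000FFFFu;
            x = (x | (x << 8)) & 0x00FF00FFu;
            x = (x | (x << 4)) & 0x0F0F0F0Fu;
            x = (x | (x << 2)) & 0x33333333u;
            x = (x | (x << 1)) & 0x55555555u;
            return x;
        }

        /* Inverse: gather the even-position bits back down to bit i. */
        static uint32_t compact_bits(uint32_t x)
        {
            x &= 0x55555555u;
            x = (x | (x >> 1)) & 0x33333333u;
            x = (x | (x >> 2)) & 0x0F0F0F0Fu;
            x = (x | (x >> 4)) & 0x00FF00FFu;
            x = (x | (x >> 8)) & 0x0000FFFFu;
            return x;
        }

        int main(void)
        {
            uint32_t in = 0x1FFFu;               /* all 13 input bits set */
            uint32_t out = spread_bits(in);
            printf("%08x -> %08x -> %08x\n", in, out, compact_bits(out));
            /* prints: 00001fff -> 01555555 -> 00001fff */
            return 0;
        }

    Four shift-and-mask rounds replace the 13 test-and-set branches, and the reversed structure answers the inverse-transformation question as well.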

  • Coding Practices which enable the compiler/optimizer to make a faster program

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword, to hint to the compiler that it might be a good idea to keep this variable in an internal register. They also made the ternary operator to help generate better code. As time passed, the compilers matured. They became very smart, in that their flow analysis allows them to make better decisions about what values to hold in registers than you could possibly make, and the register keyword became unimportant. FORTRAN can be faster than C for some sorts of operations, due to aliasing issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code. What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question. [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job it can. What can you do to refactor the code that will remove these prohibitions, and allow the optimizer to generate even faster code? [Edit] Offset related link
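
    To make the aliasing point above concrete, here is a minimal C99 sketch (the function names are illustrative, and results vary by platform and compiler): the restrict qualifier is the usual C way to hand the optimizer the no-aliasing guarantee that FORTRAN gets by default.

        #include <stddef.h>

        /* Without restrict, the compiler must assume out and in might
           overlap, so it is forced to be conservative about reloading
           in[i] after each store to out[i]. */
        void scale(float *out, const float *in, float k, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                out[i] = in[i] * k;
        }

        /* With restrict, the pointers are promised not to alias, so the
           compiler is free to keep values in registers and to vectorize. */
        void scale_restrict(float * restrict out, const float * restrict in,
                            float k, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                out[i] = in[i] * k;
        }

    The promise is the programmer's to keep: calling scale_restrict with overlapping buffers is undefined behavior, which is exactly the bargain FORTRAN arrays make implicitly.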

  • Can I run a JavaScript function AFTER Google Loader has run?

    - by thatryan
    I am loading the Google API using google.load() and I need to process some of what it builds, but I need to run my JavaScript after it has completely loaded. Is there a way to ensure that happens? Here is how I build the list of images. I need to add an attribute to each img tag, and I can't do that until after the list is built, right?

        google.load("feeds", "1");

        function initialize() {
          var feed = new google.feeds.Feed("myfeed.rss");
          feed.load(function(result) {
            if (!result.error) {
              var container = document.getElementById("feed");
              for (var i = 0; i < result.feed.entries.length; i++) {
                var entry = result.feed.entries[i];
                var entryTitle = entry.title;
                var entryContent = entry.content;
                imgContent = entryContent + "<p>" + entryTitle + "</p>";
                var div = document.createElement("div");
                div.className = "image";
                div.innerHTML = imgContent;
                container.appendChild(div);
              }
            }
          });
        }

        google.setOnLoadCallback(initialize);

  • What happens to existing workspaces after upgrading to TFS 2010

    - by e-mre
    Hi, I was looking for some insight into what happens to existing workspaces, and to files that are already checked out, after an upgrade to TFS 2010. Surprisingly, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.) I've checked the TFS installation guide and searched the web, and all I could find were upgrade scenarios for the server side. Nobody even mentions what happens to source control clients. I've created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is that the new TFS installation has a new instance ID. When I redirected the clients to the new server, the client seemed unable to match the files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade. As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to a paper or blog post about it. Thanks in advance...

  • PHP Transferring Photos From One Oracle Database Table to Another

    - by Jonathan Swift
    I am attempting to transfer a set of photos (BLOBs) from one table to another across databases. I'm nearly there, except for binding the photo parameter. I have the following code:

        $conn_db1 = oci_pconnect('username', 'password', 'db1');
        $conn_db2 = oci_pconnect('username', 'password', 'db2');

        $parse_db1_select = oci_parse($conn_db1,
            "SELECT REF PID, BINARY_OBJECT PHOTOGRAPH FROM BLOBS");
        $parse_db2_insert = oci_parse($conn_db2,
            "INSERT INTO PHOTOGRAPHS (PID, PHOTOGRAPH) VALUES (:pid, :photo)");

        oci_execute($parse_db1_select);

        while ($row = oci_fetch_assoc($parse_db1_select)) {
            $pid = $row['PID'];
            $photo = $row['PHOTOGRAPH'];

            oci_bind_by_name($parse_db2_insert, ':pid', $pid, -1, OCI_B_INT); // This line causes an error
            oci_bind_by_name($parse_db_insert, ':photo', $photo, -1, OCI_B_BLOB);

            oci_execute($parse_db2_insert);
        }

        oci_close($db1);
        oci_close($db2);

    But I get the following error on the line commented above:

        Warning: oci_execute() [function.oci-execute]: ORA-03113: end-of-file on communication channel
        Process ID: 0
        Session ID: 790 Serial number: 118

    Does anyone know the right way to do this?

  • How do I tweak columns in a Flat File Destination in SSIS?

    - by theog
    I have an OLE DB data source and a Flat File Destination in the Data Flow of my SSIS project. The goal is simply to pump data into a text file, and it does that. Where I'm having problems is with the formatting. I need to be able to rtrim() a couple of columns to remove trailing spaces, and I have a couple more that need their leading zeros preserved. The current process loses all the leading zeros. The rtrim() can be done by simple truncation and ignoring the truncation errors, but that's very inelegant and error-prone; I'd like to find a better way, like actually applying the rtrim() function where needed. Exploring similar SSIS questions and answers on SO, the advice seems to be "use a Script Task", but that's usually just thrown out there with no details, and it's not at all an intuitive thing to set up. I don't see how to use scripting to do what I need. Do I use a Script Task on the Control Flow, or a Script Component in the Data Flow? Can I do rtrim() and pad strings where needed in a script? Has anybody got an example of doing this or similar things? Many thanks in advance.

  • How to enumerate a Word document using the Office interop API?

    - by Shekhar
    Hello everyone, I want to traverse all the elements of a Word document one by one and process each element according to its type (header, sentence, table, image, textbox, shape, etc.). I tried to find an enumerator or object that represents the elements of a document in the Office interop API, but failed. The API offers Sentences, Paragraphs and Shapes collections, but it doesn't provide a generic object that can point to the next element. For example (please imagine this as a Word document):

        <header of document>
        <plain text sentences>
        <table with many rows, columns>
        <text box>
        <image>
        <footer>

    So now I want some enumerator which will first give me <header of document>, then on the next iteration give me <plain text sentences>, then <table with many rows, columns>, and so on. Does anyone know how we can achieve this? Is it possible? I am using C#, Visual Studio 2005 and Word 2003. Thanks a lot.

  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking ('Oh no, not that again!'), but here we are, since Google has not yet provided a simpler method. I have been using a queue-based solution which worked fine:

        import datetime
        import logging

        from google.appengine.ext import db
        from google.appengine.ext import deferred

        from models import *

        DELETABLE_MODELS = [Alpha, Beta, AlphaBeta]

        def initiate_purge():
            for e in DELETABLE_MODELS:
                deferred.defer(delete_entities, e, 'purging', _queue='purging')

        class NotEmptyException(Exception):
            pass

        def delete_entities(e, queue):
            try:
                q = e.all(keys_only=True)
                db.delete(q.fetch(200))
                ct = q.count(1)
                if ct > 0:
                    raise NotEmptyException('there are still entities to be deleted')
                else:
                    logging.info('processing %s completed' % queue)
            except Exception, err:
                deferred.defer(delete_entities, e, queue, _queue=queue)
                logging.info('processing %s deferred: %s' % (queue, err))

    All this does is queue a request to delete some data (once for each class); if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting refresh in a browser for 10 minutes. However, I'm having trouble deleting AlphaBeta entities; there are always a few left at the end. I think it is because that model contains reference properties:

        class AlphaBeta(db.Model):
            alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas')
            beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas')

    I have tried deleting the indexes relating to these entity types, but that did not make any difference. Any advice would be appreciated.

  • jQuery removing elements from DOM but still reporting as present

    - by RyanP13
    Hi, I have an address finder system: a user enters a postcode, and if the postcode validates, an address list is returned and displayed. The user then selects an address line, the list disappears, and the address line is split further into some form inputs. The issue I am facing: when the user has been through the above process, then clears the postcode field and hits the find-address button, the address list re-appears. Even though the list and its parent tr have been removed from the DOM, it is still reported as present, with length 1. My code is as follows:

        // when postcode validated display box
        var $addressList = $("div#selectAddress > ul").length;

        // if address list present show the address list
        if ($addressList != 0) {
            $("div#selectAddress").closest("tr").removeClass("hide");
        }

        // address list hidden by default
        // if coming back to modify details then display address inputs
        var $customerAddress = $("form#detailsForm input[name*='customerAddress']");
        var $addressInputs = $.cookies.get('cpqbAddressInputs');

        if ($addressInputs) {
            if ($addressInputs == 'visible') {
                $($customerAddress).closest("tr").removeClass("hide");
            }
        } else {
            $($customerAddress).closest("tr").addClass("hide");
        }

        // Need to change form action URL to call post code web service
        $("input.findAddress").live('click', function() {
            var $postCode = encodeURI($("input#customerPostcode").val());
            if ($postCode != "") {
                var $formAction = "customerAction.do?searchAddress=searchAddress&custpc=" + $postCode;
                $("form#detailsForm").attr("action", $formAction);
            } else {
                alert($addressList);
            }
        });

        // darker highlight when li is clicked
        // split address string into corresponding inputs
        $("div#selectAddress ul li").live('click', function() {
            $(this).removeClass("addressHover");
            //$("li.addressClick").removeClass("addressClick");
            $(this).addClass("addressClick");

            var $splitAddress = $(this).text().split(",");
            $($customerAddress).each(function() {
                var $inputCount = $(this).index("form#detailsForm input[name*='customerAddress']");
                $(this).val($splitAddress[$inputCount]);
            });

            $($customerAddress).closest("tr").removeClass("hide");
            $.cookies.set('cpqbAddressInputs', 'visible');

            $(this).closest("tr").fadeOut(250, function() {
                $(this).remove();
            });
        });

  • How to redirect registry access of a DLL loaded by my program

    - by dummzeuch
    I have got a DLL that I load in my program which reads and writes its settings to the registry (HKCU). My program changes these settings prior to loading the DLL, so the DLL uses the settings my program wants it to use, and that works fine. Unfortunately, I need to run several instances of my program with different settings for the DLL. Now the approach I have used so far no longer works reliably, because it is possible for one instance of the program to overwrite the settings that another instance just wrote, before the DLL has a chance to read them. I haven't got the source of the DLL in question and I cannot ask the programmer who wrote it to change it. One idea I had was to hook the registry access functions and redirect them to a different branch of the registry, specific to the instance of my program (e.g. using the process ID as part of the path). I think this should work, but maybe you have a different or more elegant idea. In case it matters: I am using Delphi 2007 for my program; the DLL is probably written in C or C++.
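
    For reference, Win32 has a per-process primitive that matches the hooking idea: RegOverridePredefKey redirects a predefined key such as HKEY_CURRENT_USER for the calling process only. A minimal C sketch follows; the subtree path is hypothetical, it assumes the DLL uses the ordinary registry API (it won't catch code that caches handles), the call must happen before the DLL reads its settings, and it affects the whole process, host program included:

        #include <windows.h>
        #include <stdio.h>

        static HKEY g_override;  /* kept open while the override is active */

        /* Redirect HKCU for this process to a per-instance subtree, e.g.
           HKCU\Software\MyApp\Instances\<pid> (the path is illustrative). */
        static BOOL redirect_hkcu_for_this_process(void)
        {
            char path[128];
            sprintf(path, "Software\\MyApp\\Instances\\%lu",
                    (unsigned long)GetCurrentProcessId());

            if (RegCreateKeyExA(HKEY_CURRENT_USER, path, 0, NULL, 0,
                                KEY_ALL_ACCESS, NULL, &g_override,
                                NULL) != ERROR_SUCCESS)
                return FALSE;

            /* From here on, HKCU opens in this process resolve to the new
               subtree; RegOverridePredefKey(HKEY_CURRENT_USER, NULL)
               restores the default mapping. */
            return RegOverridePredefKey(HKEY_CURRENT_USER,
                                        g_override) == ERROR_SUCCESS;
        }

    The same API can be imported from advapi32.dll in Delphi as well; each instance would seed its own subtree with the desired settings before loading the DLL.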

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts into a 34GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes, and the primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently, I write each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field at present, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    2) So I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, though I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes?

    Thanks, Ben

  • Different standard streams per POSIX thread

    - by Roman Nikitchenko
    Is there any possibility to achieve different redirections of standard output (e.g. printf(3)) for different POSIX threads? What about standard input? I have a lot of code based on standard input/output, and I can only separate this code into different POSIX threads, not processes. The platform is Linux with the C standard library. I know I could refactor the code to replace printf() with fprintf() and so on, but in that case I would need to provide some kind of context that the old code doesn't have. So does anybody have a better idea? (Look at the code below.)

        #include <pthread.h>
        #include <stdio.h>

        void *different_thread(void *arg)
        {
            // Something to redirect standard output which doesn't affect
            // the main thread...
            // printf() shall go to a different stream.
            printf("subthread test\n");
            return NULL;
        }

        int main()
        {
            pthread_t id;
            pthread_create(&id, NULL, different_thread, NULL);

            // In the main thread things should be printed normally...
            printf("main thread test\n");

            pthread_join(id, NULL);
            return 0;
        }
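
    One possible workaround, if the legacy code can be recompiled against a wrapper header: keep a thread-local FILE* and route a printf-like wrapper through it, so each thread can point its "stdout" somewhere else without threading a context parameter through the old code. A sketch follows; the names tprintf/set_thread_out and the log path are hypothetical, and __thread is a GCC/Clang extension:

        #include <pthread.h>
        #include <stdio.h>
        #include <stdarg.h>

        static __thread FILE *thread_out;  /* NULL means "real stdout" */

        static void set_thread_out(FILE *f) { thread_out = f; }

        static int tprintf(const char *fmt, ...)
        {
            va_list ap;
            int n;
            va_start(ap, fmt);
            n = vfprintf(thread_out ? thread_out : stdout, fmt, ap);
            va_end(ap);
            return n;
        }

        /* In a wrapper header included by the legacy code:
           #define printf tprintf */

        static void *worker(void *arg)
        {
            (void)arg;
            set_thread_out(fopen("/tmp/worker.log", "w"));  /* per-thread */
            tprintf("subthread test\n");   /* goes to the log file */
            return NULL;
        }

        int main(void)
        {
            pthread_t id;
            pthread_create(&id, NULL, worker, NULL);
            tprintf("main thread test\n"); /* still the real stdout */
            pthread_join(id, NULL);
            return 0;
        }

    Standard input could be wrapped the same way with a thread-local input stream; true per-thread redirection of file descriptor 1 itself is not possible, since file descriptors are shared by all threads of a process.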

  • Most elegant way to break CSV columns into separate data structures using Python?

    - by Nick L
    I'm trying to pick up Python. As part of the learning process I'm porting a project I wrote in Java to Python. I'm at a section now where I have a list of CSV headers of the form:

        headers = [a, b, c, d, e, ...]

    and separate lists of the groups these headers should be broken up into, e.g.:

        headers_for_list_a = [b, c, e, ...]
        headers_for_list_b = [a, d, k, ...]

    I want to take the CSV data and turn it into dicts based on these groups, e.g.:

        list_a = [
            {b: val_1b, c: val_1c, e: val_1e, ...},
            {b: val_2b, c: val_2c, e: val_2e, ...},
            {b: val_3b, c: val_3c, e: val_3e, ...},
            ...
        ]

    where, for example, val_1b is the first row of the 'b' column, val_3c is the third row of the 'c' column, etc. My first "Java instinct" is to do something like:

        for row in data:
            for col_num, val in enumerate(row):
                col_name = headers[col_num]
                if col_name in group_a:
                    dict_a[col_name] = val
                elif col_name in group_b:
                    dict_b[col_name] = val
                ...
            list_a.append(dict_a)
            list_b.append(dict_b)
            ...

    However, this method seems inefficient/unwieldy and doesn't possess the elegance that Python programmers are constantly talking about. Is there a more "Zen-like" way I should try, keeping with the philosophy of Python?

  • Looping through an ArrayCollection for matching items

    - by charlie
    Hi, I hope someone can help me. I am trying to build a dynamic form for a questionnaire module. Building on some previous posts, I am using a process similar to the one in the question http://stackoverflow.com/questions/629021/how-to-generate-a-formmxform-dynamically-in-flex. I have managed to prove out extending the XML to include a calendar, a combobox, etc. My problem is that I now need to get the data from an ArrayCollection rather than from an XML file. I am looking to loop through the AC and, where type = "text", render a TextInput field; where type = "calendar", render a calendar, and so on. My code so far, looking only at a TextInput field (and sorry for all the comments included ;), is:

        [Bindable]
        public var AC:ArrayCollection = new ArrayCollection([
            {type:'text', direction:'horizontal', tooltip:'test tooltip',
             label:'my textbox label', id:'1'},
            {type:'text', direction:'horizontal', tooltip:'another tooltip',
             label:'another label', id:'2'}
        ]);

        private function init():void
        {
            var form:Form = new Form();

            for each (var elements:XML in AC)
            {
                switch (elements.@type.toString())
                {
                    case "text":
                        var fi:FormItem = new FormItem();
                        // fi.toolTip = elements.tooltip.toString();
                        // fi.required = getglobalprofile.required.toString();
                        // fi.direction = getglobalprofileb[i].@direction;

                        var li:Label = new Label();
                        // li.text = getglobalprofileb[i].@label;
                        // li.width = 100;

                        var ti:TextInput = new TextInput();
                        ti.text = "test";
                        ti.width = 200;

                        form.addChild(fi);
                        fi.addChild(li);
                        fi.addChild(ti);
                        // break;
                }
            }

            this.addChild(form);
        }
        ]]>

        <mx:Form id="form" name="form">
        </mx:Form>

    If you are interested in the working XML version (rendering only), let me know and I will post it as well.

  • HTTP Negotiate Windows vs. Unix server implementation using python-kerberos

    - by ondra
    I tried to implement simple single sign-on in my Python web server. I used the python-kerberos package, which works nicely: I tested it from my Linux box (authenticating against Active Directory) and it worked without problems. However, when I try to authenticate using Firefox from a Windows machine (no special setup, just a user logged into the domain, plus my server added to negotiate-auth.trusted-uris), it doesn't work. I have looked at what is sent, and it doesn't even resemble what the Linux machine sends. The Microsoft description of the process pretty much matches the way my interaction from Linux works, but the Windows machine generally sends a very short string which doesn't match what the Microsoft documentation states; when base64-decoded, it is something like 12 zero bytes followed by 3 or 4 non-zero bytes (the GSS functions then return that they don't support such a scheme). Either there is something wrong with the client Firefox settings, or there is some protocol I am supposed to follow for Negotiate but for which I cannot find any reference. Any ideas what's wrong? Do you have any idea which protocol I should be trying to find? It doesn't look like SPNEGO, at least from the MS documentation.

  • Request size limitation when using MultipartHttpServletRequest of Spring 3.0

    - by Spiderman
    I'd like to know what the size limitation is if I upload a list of files in one client form submission using the HTTP multipart content type. On the server side I am using Spring's MultipartHttpServletRequest to handle the request. My questions:

    1. Should there be a separate per-file size limitation and a total request size limitation, or is the file size the only limit, with the request capable of uploading hundreds of files as long as each one is not too large?

    2. Does the Spring request wrapper read the complete request and store it in the Java heap memory, or does it store temporary files of it, so that it can handle a large volume?

    3. Would reading the HttpServletRequest in streaming mode change the size limitation, compared with the application server reading the complete HTTP request at once?

    4. What is the bottleneck of this process: the Java heap size, the quota of the filesystem on which my web server runs, the maximum allowed BLOB size of the database in which I am going to save the file, or Spring's internal limitations?

    Related threads that still don't have an exact answer to this:

        does-spring-framework-support-streaming-mode-in-mutlipart-requests
        is-there-a-way-to-get-raw-http-request-stream-from-java-servlet-handler
        how-to-drop-body-of-a-request-after-checking-headers-in-servlet
        apache-commons-fileupload-throws-malformedstreamexception

  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone, I have a stored procedure that does, among other things, some inserts into different tables inside a loop. See the example below for a clearer picture:

        INSERT INTO T1 VALUES ('something')
        SET @MyID = Scope_Identity()

        -- ... some stuff goes here

        INSERT INTO T2 VALUES (@MyID, 'something else')

        -- ... the rest of the procedure

    These two tables (T1 and T2) each have an IDENTITY(1, 1) column; let's call them ID1 and ID2. Normally, for each record inserted into T1 there is a record inserted into T2, and the identity columns in both are incremented consistently. However, after running the procedure in our production database (a very busy database) and reaching more than 6250 records in each table, I noticed one incident where ID1 does not match ID2. The "wrong" records looked something like this:

        ID1   Col1
        ----  ---------
        4709  data-4709
        4710  data-4710

        ID2   ID1   Col1
        ----  ----  ---------
        4709  4710  data-4709
        4710  4709  data-4710

    Note the "inverted" ID1 in the second table. Not knowing that much about SQL Server's underlying operations, I have put together the following "theory"; maybe someone can correct me on it. I think that because the loop runs faster than the physical writes to the table, and/or because some other thing delayed the writing process, the records were buffered, and when the time came to write them, they were written in no particular order.

    Is that even possible? If not, how can the scenario above be explained? If yes, then I have another question: what if the first insert (in the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? And if the answer to that is also yes, what can I do to ensure that the insertion into the two tables happens in sequence with the correct IDENTITY? I appreciate any comments and information that help me understand this. Thanks in advance.

  • Utility of List<T>.Sort() versus List<T>.OrderBy() for a member of a custom container class

    - by ccomet
    I've found myself running back through some old .NET 3.5 legacy code, and found some points where a whole bunch of lists and dictionaries must be updated in a synchronized fashion. I've determined that I can make this process infinitely easier to use and to understand by converging them into custom container classes of new custom classes. There are some points, however, where I had concerns about organizing the contents of these new container classes by a specific inner property, for example sorting by the ID number property of one class. As the container classes are primarily based around a generic List object, my first instinct was to have the inner classes implement IComparable and write a CompareTo method that compares the properties. That way, I can just call items.Sort() when I want to invoke the sorting. However, I've been thinking instead about using items = items.OrderBy(Func). This way it is more flexible if I need to sort by any other property. Readability is better as well, since the property used for sorting is listed in-line with the sort call rather than having to look up the IComparable code. The overall implementation feels cleaner as a result. I don't care for premature or micro-optimization, but I like consistency. I find it best to stick with one kind of implementation for as many cases as is appropriate, and use different implementations where necessary. Is it worth converting my code to use the LINQ OrderBy instead of List.Sort? Is it better practice to stick with the IComparable implementation for these custom containers? Are there any significant mechanical advantages offered by either path that I should weigh the decision on? Or is their end functionality equivalent to the point that it just becomes coder's preference?

  • getting a "default" concrete class that implements an interface

    - by Roger Joys
    I am implementing a custom (and generic) Json.NET serializer and hit a bump in the road that I could use some help on. When the deserializer is mapping to a property whose type is an interface, how can I best determine what sort of object to construct to deserialize into the interface property? I have the following:

        [JsonConverter(typeof(MyCustomSerializer<foo>))]
        class foo
        {
            int Int1 { get; set; }
            IList<string> StringList { get; set; }
        }

    My serializer properly serializes this object, but when the JSON comes back in and I try to map the JSON parts to the object, I have a JArray and an interface. I am currently instantiating anything enumerable, like List, as:

        theList = Activator.CreateInstance(property.PropertyType);

    This works fine during the deserialization process, but when the property is IList<string>, I get runtime complaints (obviously) about not being able to instantiate an interface. So how would I know what type of concrete class to create in a case like this? Thank you.

  • Is there any other way of using signed applets

    - by 640KB
    Hi there. If I want to deploy high-privileged applets, they need to be signed: a certificate is created, a jar file is signed with jarsigner, and then in the HTML code one has to specify code, codebase AND archive (the jar which we signed). However, I wrote a servlet which does two things: it sits at the URL pointed to by the codebase and serves class bytecode to the applet, and it also uses serialization to communicate with the applet, whereby whenever the applet gets a class it does not know, it goes to the codebase, which ends up back at the servlet. Almost like a mini RMI setup, but simpler. I hope you can see the power in this. Unfortunately, for signed applets one needs the archive. Now, the servlet is also able to load a Certificate object and send it to the applet too. So here is the setup: at one point the applet receives class bytecode, and it also has the Certificate. It would be nice if the applet could instantiate all received classes using that certificate (otherwise the code from the jar is signed and the code from outside is not, which prompts nasty messages to the user). So my question to you fine Java aficionados: would there be any way for me to use the bytecode data and the Certificate to instantiate the class as a signed object, so that the plugin pops the security dialog, accepts the certificate and elevates the object's privileges? What I could find is that there is a class CodeSource that accepts a codebase URL and a certificate and is essential to the signing process. What I am not sure about is how one could intercept class loading inside applets to install additional certificates not obtained through a JAR file via archive. What do you say? Thanks a bunch.

  • CakePHP: Interaction between different files/classes

    - by Alexx Hardt
    Hey, I'm cloning a commercial student management system. Students use the frontend to apply for lectures; uni staff can modify events (time, room, etc). The core of the app will be the algorithm which distributes the seats to students. I already asked about it here: How to implement a seat distribution algorithm for uni lectures. Now, I found a class for that algorithm here: http://www.phpclasses.org/browse/file/10779.html. I put the 'class GA' into app/vendors. I need to write a 'class Solution', which represents one object (a child, and later a parent, in the evolutionary process). I'll also have to write the functions mutate(), crossover() and fitness(). fitness() calculates the score of a solution, based on things like whether there are overbooked courses; crossover() is the crazy monkey-sex function which produces a child from two parents; and mutate() modifies a child after crossover. Now, the fitness() function needs to access a few related models and their find() functions: it evaluates a solution's fitness by checking e.g. whether there are overbooked courses or unfulfilled wishes, and penalizes them. Where would I put ga.php, solution.php and the three functions? ga.php has to access the functions, but the functions have to access the models. I also don't want to call any App::import()s from within the fitness() function, because it gets called many thousands of times when the algorithm runs. Hope someone can help me. Thanks in advance =)
