Search Results

Search found 4860 results on 195 pages for 'parallel extensions'.

  • jQuery addClass() not running before jQuery.ajax()

    - by Josh
    I'm trying to have a button whose onclick applies a class "loading" (which sets the cursor to "wait") to the body before making a series of ajax requests. The code is:

        $('#addSelected').click(function(){
            $('body').addClass('loading');
            var products = $(':checkbox[id^=add_]:checked');
            products.each(function(){
                var prodID = $(this).attr('id').replace('add_', '');
                var qty = $('#qty_' + prodID).val();
                if($('#prep_' + prodID).val()) {
                    prodID += ':' + $('#prep_' + prodID).val();
                }
                // Have to use .ajax rather than .get so we can use async:false, otherwise
                // product adds happen in parallel, causing data to be lost.
                $.ajax({
                    url: '<?=base_url()?>basket/update/' + prodID + '/' + qty,
                    async: false,
                    beforeSend: function(){ $('body').addClass('loading'); }
                });
            });
        });

    I've tried doing $('body').addClass('loading'); both before the requests and as a beforeSend callback, but there is no difference. In Firebug I can see that body doesn't get the loading class until after the requests are complete. Any ideas?
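
    A likely cause: async: false blocks the browser's UI thread, so the repaint triggered by addClass() never happens until the whole loop finishes. A minimal sketch of one common workaround, assuming the existing loop is factored into a hypothetical runSynchronousAdds() helper: apply the class, then yield with setTimeout(..., 0) so the browser can paint before the blocking requests begin.

        $('#addSelected').click(function () {
            $('body').addClass('loading');   // queue the style change
            setTimeout(function () {         // yield so the repaint can actually happen
                runSynchronousAdds();        // hypothetical: the async:false loop above
                $('body').removeClass('loading');
            }, 0);
        });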

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functionality I will have to support some long-running operations, for example:

    Initiated by a user: a user can upload an (xml) file to the server. On the server I need to extract the file, do some manipulation (insert into the db), etc. This can take from one minute to ten minutes (or even more, depending on file size). Of course I don't want to block the request while the import is running; I want to redirect the user to a progress page where he will have a chance to watch the status and errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, and it would be nice to run the imports in parallel. At first I was thinking of creating a new thread in IIS (in the controller action) and running the import there, but I am not sure that creating worker threads on a web server is a good idea. Should I use Windows services or some other approach?

    Initiated by the system: I will have to periodically update a Lucene index with new data, and I will have to send mass emails (in the future). Should I implement this as a job in the site and run the job via Quartz.net, or should I also create a Windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!
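
    For the system-initiated work, a minimal sketch of an in-site job against the Quartz.NET 1.x API (check the version in use; IndexUpdateJob and SearchIndexer.RebuildIndex are made-up names for illustration):

        using Quartz;

        public class IndexUpdateJob : IJob
        {
            public void Execute(JobExecutionContext context)
            {
                // Runs on a Quartz worker thread, not an ASP.NET request
                // thread, so a slow re-index never blocks a user request.
                SearchIndexer.RebuildIndex(); // hypothetical helper
            }
        }

    The usual caveat applies to any in-process approach: IIS can recycle the application at any time, which is the main argument for moving truly long-running imports into a Windows service instead of a background thread.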

  • How to test for existence of a script-scoped variable in PowerShell?

    - by Damian Powell
    Is it possible to test for the existence of a script-scoped variable in PowerShell? I've been using the PowerShell Community Extensions (PSCX), but I've noticed that if you import the module while Set-PSDebug -Strict is set, an error is produced:

        The variable '$SCRIPT:helpCache' cannot be retrieved because it has not been set.
        At C:\Users\...\Modules\Pscx\Modules\GetHelp\Pscx.GetHelp.psm1:5 char:24

    While investigating how I might fix this, I found this piece of code in Pscx.GetHelp.psm1:

        #requires -version 2.0
        param([string[]]$PreCacheList)
        if ((!$SCRIPT:helpCache) -or $RefreshCache) {
            $SCRIPT:helpCache = @{}
        }

    This is pretty straightforward code: if the cache doesn't exist or needs to be refreshed, create a new, empty cache. The problem is that evaluating $SCRIPT:helpCache while Set-PSDebug -Strict is in force causes the error, because the variable hasn't been defined yet. Ideally we could use a Test-Variable cmdlet, but no such thing exists! I thought about looking in the variable: provider, but I don't know how to determine the scope of a variable. So my question is: how can I test for the existence of a variable while Set-PSDebug -Strict is in force, without causing an error?
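
    A sketch of one approach: Test-Path against the variable: provider checks for existence without evaluating the variable, so it should not trip strict mode. The scope-qualified path form is the part worth verifying on your own system:

        if (!(Test-Path variable:script:helpCache) -or $RefreshCache) {
            $SCRIPT:helpCache = @{}
        }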

  • Google Web Toolkit or Microsoft Technology (Silverlight, ASP.NET)

    - by NativeByte
    We have a large code base in MFC and VB, and a few applications in .NET. All these applications interoperate with each other on the user's machine and also connect to Unix servers via sockets. Recently we have started discussing a rewrite of our applications and the possibility of moving a lot of these desktop applications to the web (they would run on an intranet). A straightforward way is rewriting them in one of the .NET technologies, but a suggestion has popped up about using Google Web Toolkit, and the argument is that it would help create applications that run in a browser on both desktop and mobile devices. One of the key problems I see is that GWT is a large abstraction over Javascript. It would require the team to learn GWT, Javascript, new IDEs, etc., as their experience has been primarily with Microsoft technologies and not Java; it would be easier for them to learn .NET technologies instead of GWT. I do not have deep knowledge of GWT and its drawbacks and pitfalls, and I do not know of a parallel Microsoft technology that I should investigate. So I would appreciate it if people here could share their views or experiences using GWT or the equivalent Microsoft technology.

  • Replace low level web-service reference call transport with custom one

    - by hoodoos
    I'm not sure the title sounds right, actually, so I will give more explanation here, starting from the very beginning. :) I'm using C# and .NET for my development. I have an application that makes requests to a SOAP web service, and for each user request it produces 3 to 10 requests to the web service. They should all run asynchronously and finish at about the same time, so I use the Async method of the generated web-service reference and then wait for the result in a callback. But it seems to start a thread (or take one from the pool) for every async call I make, so if I have 10 clients I end up spawning 30 to 100 threads, which sounds terrible even for my 16-core server. :) So I wanted to replace the low-level transport implementation with my own, which uses non-blocking sockets and can handle at least 50 sockets running in parallel on one thread without much overhead. But I don't actually know where best to put my override. I analyzed the System.Web.Services.Protocols.SoapHttpClientProtocol class and see that it has a GetWebRequest method which I could use, if only I could somehow intercept the object it creates, get an HTTP request with all headers and body from it, and then send it with my own sockets. Any ideas what approach to use? Or maybe there's something built into the framework I can use?
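
    A minimal sketch of the interception point, assuming MyServiceProxy stands in for the generated proxy class (generated proxies are partial, so the override can live in a separate file):

        using System;
        using System.Net;

        public partial class MyServiceProxy
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                WebRequest request = base.GetWebRequest(uri);
                // Inspect or tune the request (headers, timeout) here
                // before the framework sends it.
                return request;
            }
        }

    For replacing the transport wholesale, WebRequest.RegisterPrefix is also worth a look: it lets a custom IWebRequestCreate implementation supply your own WebRequest subclass for a given URI prefix, which the proxy would then pick up automatically.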

  • NetLogo: error when putting a variable in a table; only constants allowed?

    - by Chantal
    Hello. Currently I am working on a NetLogo program for a vehicle routing problem, where I need to use nodes and links (links are called streets in the program). I have a practical problem: how do I put the variable linkspeed into a table along with another node? Constants like 200 are fine. Online I found some examples where variables are used, but I do not know why I keep getting the following error (or why NetLogo expects a constant):

        Expected a constant.

    Here is the relevant piece of code:

        extensions [table]
        streets-own [linkspeed linktoll]
        nodes-own [netw]

        ;; In another piece of code linkspeed is assigned successfully to the links

        to cheapcalc
          ;; start conditions: set costs very high (300000)
          ;; state 3 unsearched, state 2 searching, state 1 searched (for later purposes)
          ask nodes [
            set i 0
            set j count nodes
            set netw table:make
            while [i < j] [
              table:put netw (i) [3000000 3]
              set i (i + 1)]]
          set i 0
          let k 0
          ask node 35  ;; here I use node 35 as an example.
                       ;; node 35 is connected to nodes 34, 36, 20 and 50
          [table:put netw (35) [0 1]  ;; a node needs no cost to travel to itself;
                                      ;; putting constants is ok.
           while [i < j]
           [ask my-links [
              ask both-ends [
                if (who != 35) [
                  set color blue
                  ;; set temp ([linkspeed] of street 35 who)
                  ;; my real goal is to put linkspeed here, but i is easier than linkspeed.
                  table:put netw (who) [ i 2 ]]]]
            set i (i + 1)]]
          ;; later this repeats for the next node.
        end

    I hope somebody knows what is going on. Kind regards, Chantal
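
    The bracketed list literal is the likely culprit: in NetLogo, a list written with [ ] may contain only constants, which is why [3000000 3] works but [ i 2 ] does not. A sketch of the usual fix is to build the list at run time with the list primitive:

        table:put netw (who) (list i 2)
        ;; and for the real goal, reusing the commented-out expression above:
        table:put netw (who) (list ([linkspeed] of street 35 who) 2)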

  • PowerShell PSCX Read-Archive: Cannot bind parameter... problem

    - by Robert
    I'm running across a problem I can't seem to wrap my head around using the Read-Archive cmdlet available via the PowerShell Community Extensions (v2.0.3782.38614). Here is a cut-down sample that exhibits the problem:

        $mainPath = "p:\temp"
        $dest = Join-Path $mainPath "ps\CenCodes.zip"
        Read-Archive -Path $dest -Format zip

    Running the above produces the following error:

        Read-Archive : Cannot bind parameter 'Path'. Cannot convert the "p:\temp\ps\CenCodes.zip" value of
        type "System.String" to type "Pscx.IO.PscxPathInfo".
        At line:3 char:19
        + Read-Archive -Path <<<< $dest -Format zip
            + CategoryInfo          : InvalidArgument: (:) [Read-Archive], ParameterBindingException
            + FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Pscx.Commands.IO.Compression.ReadArchiveCommand

    If I do not use Join-Path to build the path passed to Read-Archive, it works, as in this example:

        $mainPath = "p:\temp"
        $path = $mainPath + "\ps\CenCodes.zip"
        Read-Archive -Path $path -Format zip

    Output from the above:

        ZIP Folder: CenCodes.zip#\
        Index  LastWriteTime      Size     Ratio    Name
        -----  -------------      ----     -----    ----
        0      6/17/2010 2:03 AM  3009106  24.53 %  CenCodes.xls

    Even more confusing, if I compare the two variables passed as the Path argument in the two Read-Archive samples above, they seem identical. This...

        Write-Host "dest=$dest"
        Write-Host "path=$path"
        Write-Host ("path -eq dest is " + ($dest -eq $path).ToString())

    ...outputs...

        dest=p:\temp\ps\CenCodes.zip
        path=p:\temp\ps\CenCodes.zip
        path -eq dest is True

    Anyone have any ideas why the first sample gripes but the second one works fine?
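
    A hedged guess, not a confirmed fix: the binder may be choking on the PSObject wrapper around the Join-Path result rather than on the path text itself, so forcing the value to a plain [string] before binding is worth a try:

        $dest = [string](Join-Path $mainPath "ps\CenCodes.zip")
        Read-Archive -Path $dest -Format zip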

  • About redirected stdout in System.Diagnostics.Process

    - by sforester
    I've recently been working on a program that converts flac files to mp3 in C# using flac.exe and lame.exe. Here is the code that does the job:

        ProcessStartInfo piFlac = new ProcessStartInfo( "flac.exe" );
        piFlac.CreateNoWindow = true;
        piFlac.UseShellExecute = false;
        piFlac.RedirectStandardOutput = true;
        piFlac.Arguments = string.Format( flacParam, SourceFile );

        ProcessStartInfo piLame = new ProcessStartInfo( "lame.exe" );
        piLame.CreateNoWindow = true;
        piLame.UseShellExecute = false;
        piLame.RedirectStandardInput = true;
        piLame.RedirectStandardOutput = true;
        piLame.Arguments = string.Format( lameParam, QualitySetting, ExtractTag( SourceFile ) );

        Process flacp = null, lamep = null;
        byte[] buffer = BufferPool.RequestBuffer();

        flacp = Process.Start( piFlac );
        lamep = new Process();
        lamep.StartInfo = piLame;
        lamep.OutputDataReceived += new DataReceivedEventHandler( this.ReadStdout );
        lamep.Start();
        lamep.BeginOutputReadLine();

        int count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length );
        while ( count != 0 )
        {
            lamep.StandardInput.BaseStream.Write( buffer, 0, count );
            count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length );
        }

    Here I set the command line parameters to tell lame.exe to write its output to stdout, and I use the Process.OutputDataReceived event to gather the output data, which is mostly binary. But DataReceivedEventArgs.Data is of type string, and I have to convert it to byte[] before putting it in the cache. I think this is ugly, and when I tried this approach the result was incorrect. Is there any way to read the raw redirected stdout stream, either synchronously or asynchronously, bypassing the OutputDataReceived event?

    PS: the reason I don't let lame write to disk directly is that I'm trying to convert several files in parallel, and writing directly to disk would cause severe fragmentation. Thanks a lot!
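
    A sketch of one way to bypass OutputDataReceived entirely: read lamep.StandardOutput.BaseStream yourself. This fragment assumes a using System.IO; directive, that targetPath is a made-up output path, and that it runs on its own thread while the main loop feeds lame's stdin (two pipes pumped from one thread can deadlock):

        // Pump lame's stdout as raw bytes, no string decoding anywhere.
        byte[] outBuf = new byte[32768];
        using ( var mp3File = new FileStream( targetPath, FileMode.Create ) )
        {
            int n;
            while ( ( n = lamep.StandardOutput.BaseStream.Read( outBuf, 0, outBuf.Length ) ) > 0 )
            {
                mp3File.Write( outBuf, 0, n ); // raw MP3 bytes
            }
        }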

  • How to register application for existing file types using WiX installer?

    - by Marek
    Related to http://stackoverflow.com/questions/138550/how-to-register-file-types-extensions-with-a-wix-installer but not a duplicate. I need to handle existing file types (.jpg files). I do not want to be the default handler for .jpg; I would just like to extend the "Open with" menu with a link to my app. I see HKCR\.jpg\OpenWithList\ and HKCR\.jpg\OpenWithProgIds\ in the registry, but I am not sure whether to write to these, or how to do it correctly with WiX. Should I use something like this?

        <ProgId Id='??what here?' Description='Jpeg handled by my App'>
          <Extension Id='jpg' ContentType='image/jpeg'>
            <Verb Id='openwithmyapp' Sequence='10' Command='OpenWithMyApp' Target='[!FileId]' Argument='"%1"' />
          </Extension>
        </ProgId>

    There are many ways to fail here (like Photo Mechanics did; the HKCR entries for image file types were a real mess after I installed that software). How do I do this correctly with WiX?
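
    A sketch of the registry-level alternative, written with raw RegistryValue elements inside the application's Component (all names here are examples, and the exact value types expected under OpenWithProgids are worth double-checking against the shell documentation). The ProgId/Extension route above would typically claim the extension's default handler, which is exactly what should be avoided here:

        <RegistryValue Root="HKCR" Key="MyApp.jpeg\shell\open\command"
                       Value="&quot;[INSTALLDIR]MyApp.exe&quot; &quot;%1&quot;" Type="string" />
        <RegistryValue Root="HKCR" Key=".jpg\OpenWithProgids"
                       Name="MyApp.jpeg" Value="" Type="string" />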

  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content management capabilities. The people who will be managing the content prefer to use SharePoint Designer (the successor to FrontPage) to modify content, and I'd like to allow them to keep doing that. The issues are:

    1. Since I'd like this to be a WAP, not a website project, how can I let them see their changes in action without requiring Visual Studio on their local machines? Can I specify a "default" action for a controller so that, given a URL like /products/new_view_here, they can save pages (views) and see them in the browser without going through the check-in/build/deploy process?

    2. I'd like their changes to be stored in SVN; SharePoint Designer seems to only support Visual SourceSafe (ugh) directly.

    The ideas I've come up with so far are:

    1. Write an HTTP handler that implements the FrontPage Server Extensions protocol. This sounds time-consuming, but I haven't yet looked at the protocol spec. However, it would allow me to perform whatever operations I want on the server side, including checking files into SVN.

    2. Ditch the WAP in favor of a website project. I do not like having the source present on the server, however. Also, will MVC work in a website project?

    Surely someone has tackled this problem before?

  • Manipulating / Resizing / Scaling an image in vb.net

    - by Christian Payne
    Imagine I have a rectangle, say 400px x 300px. Then let's say I want to load an image into it. All of this is very easy using System.Drawing.DrawImage. But then I want to leave the left-hand side at 300px and change the right-hand side to 250px. I can draw the box using four DrawLine calls, but I don't know how to squash the image into the new shape. I want the right-hand side of the shape to be 250, the left side 300, and the top and bottom 400px. I can't use DrawImage, as it expects the left and right sides to be the same. Is there a way to manipulate the image into the new shape? I've looked at other questions, but they only apply where the left- and right-hand sides are equal. Any thoughts on how to squash an image into a shape which does not have parallel sides? (If it helps, I'm happy to sacrifice image quality to fit the right shape.)

  • Internet Explorer 8 opens file in browser instead of the client

    - by Rogier
    Our company works with a great Business Intelligence tool, CorVu 4.2, to analyse operational and strategic data. For several years we have been successfully working with SharePoint 2007 to collaborate and share information with colleagues. Most of my colleagues are working with Internet Explorer 7, but step by step Internet Explorer 8 is being rolled out in the company. We share a lot of CorVu files through SharePoint, but since we started using Internet Explorer 8 we have a problem that is new to us. If we click on a CorVu file in Internet Explorer 8 (not necessarily in SharePoint), a pop-up asks how to open the file; if we save the file, there is no problem. But if we open the file, it is shown in the browser and not in the CorVu client! See the screenshot below: link (I removed some unnecessary information.) So far my colleagues accept this 'feature' in Internet Explorer 8, but if we open and close more CorVu files, multiple errors (more than 10) show up, starting with: (unable to place more hyperlinks). Pressing Enter makes the errors disappear, but it's not professional! I contacted the creators of CorVu, but they don't have a solution in their client, so maybe there is a solution in Internet Explorer 8? The extensions of a CorVu file can be .sqy, .tab or .qrp. Is it possible to force these files to open in the standard client instead of the browser?

  • Manage multiple UDP calls

    - by rayman
    Hi all, I would like some advice on this issue. I am using JBoss 5.1.0 and EJB 3.0. I have a system which sends requests via UDP to remote modems and is supposed to wait for an answer from the target modem. The remote modems support only UDP calls, therefore I have to design an asynchronous mechanism (also because I want to query X modems in parallel). This is what I am trying to do: all calls are retrieved from the database, and each call is added as a message to a JMS queue. Let's say I set X MDBs on that queue, so I can work asynchronously. Each MDB sends a UDP request to the IP address (remote modem) parsed from the queue message, and then waits for an answer from that modem. Now here is the bug: a scenario can happen where an MDB gets an answer, but not from the modem it originally requested. That bad scenario causes two problems: (a) the sender which sent the message will wait forever, since its answer was accepted by another MDB; and (b) the MDB which received the answer is not the right one, and if it was in "listener" mode it was presumably waiting for an answer from a different sender (otherwise it wouldn't get any messages). Of course I can handle all of this with a retry mechanism, so both MDBs (the one that got an answer from the wrong sender, and the one that never got an answer) will try their operations again, hoping that next time they will succeed. That is the mechanism; maybe you could tell me if there is a design pattern or any other effective solution for this problem? Thanks, ray.
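
    A sketch of one way to avoid the cross-talk at the socket level: give each in-flight request its own DatagramSocket and connect() it to the target modem, so the OS discards datagrams arriving from any other address (the 512-byte buffer and 5-second timeout are illustrative choices):

        import java.io.IOException;
        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        byte[] queryModem(String modemIp, int modemPort, byte[] request) throws IOException {
            DatagramSocket socket = new DatagramSocket(); // fresh ephemeral port per request
            try {
                socket.connect(InetAddress.getByName(modemIp), modemPort);
                socket.setSoTimeout(5000); // give up and retry after 5s of silence
                socket.send(new DatagramPacket(request, request.length));
                byte[] buf = new byte[512];
                DatagramPacket reply = new DatagramPacket(buf, buf.length);
                socket.receive(reply); // only packets from modemIp:modemPort arrive here
                return reply.getData(); // note: only reply.getLength() bytes are valid
            } finally {
                socket.close();
            }
        }

    With a connected socket per request, an answer can never be delivered to the wrong MDB, which removes the first failure mode entirely.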

  • How can I superimpose modified loess lines on a ggplot2 qplot?

    - by briandk
    Background: right now I'm creating a multiple-predictor linear model and generating diagnostic plots to assess regression assumptions. (It's for a multiple regression analysis stats class that I'm loving at the moment :-) My textbook (Cohen, Cohen, West, and Aiken 2003) recommends plotting each predictor against the residuals to make sure that:

    1. The residuals don't systematically covary with the predictor
    2. The residuals are homoscedastic with respect to each predictor in the model

    On point (2), my textbook has this to say:

        Some statistical packages allow the analyst to plot lowess fit lines at the mean of the
        residuals (0-line), 1 standard deviation above the mean, and 1 standard deviation below the
        mean of the residuals.... In the present case {their example}, the two lines {mean + 1sd and
        mean - 1sd} remain roughly parallel to the lowess {0} line, consistent with the
        interpretation that the variance of the residuals does not change as a function of X. (p. 131)

    How can I modify loess lines? I know how to generate a scatterplot with a "0-line":

        # First, I'll make a simple linear model and get its diagnostic stats
        library(ggplot2)
        data(cars)
        mod <- fortify(lm(speed ~ dist, data = cars))
        attach(mod)
        str(mod)

        # Now I want to make sure the residuals are homoscedastic
        qplot(x = dist, y = .resid, data = mod) +
          geom_smooth(se = FALSE)  # "se = FALSE" removes the standard error bands

    But does anyone know how I can use ggplot2 and qplot to generate plots where the 0-line, "mean + 1sd" AND "mean - 1sd" lines are superimposed? Is that a weird/complex question to be asking?
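
    A sketch of one way to get all three lines, assuming a "local sd" estimated by smoothing the absolute deviations from the loess fit (up to a scaling constant, this tracks the conditional standard deviation):

        library(ggplot2)
        data(cars)
        mod  <- fortify(lm(speed ~ dist, data = cars))
        mu   <- predict(loess(.resid ~ dist, data = mod))           # lowess 0-line
        sdev <- predict(loess(abs(.resid - mu) ~ dist, data = mod)) # local spread
        ggplot(mod, aes(dist, .resid)) +
          geom_point() +
          geom_line(aes(y = mu)) +
          geom_line(aes(y = mu + sdev), linetype = "dashed") +
          geom_line(aes(y = mu - sdev), linetype = "dashed")

    If the dashed lines stay roughly parallel to the middle line, that is consistent with homoscedastic residuals, matching the textbook's description.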

  • How to make safe frequent DataSource switches for AbstractRoutingDataSource?

    - by serg555
    I implemented dynamic DataSource routing for Spring+Hibernate according to this article. I have several databases with the same structure, and I need to select which db will run each specific query. Everything works fine on localhost, but I am worried about how this will hold up in a real web site environment. The example uses a static context holder to determine which datasource to use:

        public class CustomerContextHolder {
            private static final ThreadLocal<CustomerType> contextHolder = new ThreadLocal<CustomerType>();

            public static void setCustomerType(CustomerType customerType) {
                Assert.notNull(customerType, "customerType cannot be null");
                contextHolder.set(customerType);
            }

            public static CustomerType getCustomerType() {
                return (CustomerType) contextHolder.get();
            }

            public static void clearCustomerType() {
                contextHolder.remove();
            }
        }

    It is wrapped inside a ThreadLocal container, but what exactly does that mean? What will happen when two web requests call this piece of code in parallel?

        CustomerContextHolder.setCustomerType(CustomerType.GOLD);
        // <another user switches the customer type to CustomerType.SILVER in another request>
        List<Item> goldItems = catalog.getItems();

    Is every web request wrapped in its own thread in Spring MVC? Will CustomerContextHolder.setCustomerType() changes be visible to other web users? My controllers have synchronizeOnSession=true. How do I make sure that nobody else switches the datasource until I run the required query for the current user? Thanks.
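
    On what the ThreadLocal means: each thread sees its own copy of the value, and servlet containers dispatch each request on its own thread, so one request's setCustomerType() is invisible to another request running in parallel. A sketch of the defensive usage pattern (the try/finally is the important part, because container threads are pooled and reused):

        CustomerContextHolder.setCustomerType(CustomerType.GOLD);
        try {
            List<Item> goldItems = catalog.getItems(); // routed to the GOLD DataSource
        } finally {
            CustomerContextHolder.clearCustomerType(); // don't leak into the next request
        }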

  • jQuery Touch Punch - draggable on iPad

    - by dshuta
    I am starting to work with the jQuery Touch Punch extensions in order to allow draggability on the iPad, but I am getting tripped up right away; probably something terribly dumb on my part. The draggable example from the developer works fine on my iPad: http://furf.com/exp/touch-punch/draggable.html -- but not mine: http://danshuta.com/touchpunch/

    Mine works fine in my desktop browser, but on the iPad it just focuses on the block and scrolls the entire page as I drag, as if it were just an image or other normal embedded object. As this is what happens normally with jQuery UI on the iPad, it makes me think the page is not loading (or is otherwise ignoring) the "punch" code from my site (though if I host the jQuery files on my site via the same path, those load and function fine in a desktop browser). Here's the entire code, very basic:

        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="utf-8" />
            <title>Touchpunchtest</title>
            <script src="http://code.jquery.com/jquery.min.js"></script>
            <script src="http://code.jquery.com/ui/1.8.17/jquery-ui.min.js"></script>
            <script src="js/jquery.ui.touch-punch.js"></script>
        </head>
        <body>
            <div id="draggybox" onclick="void(0)" style="width: 150px; height: 150px; background: green;"></div>
            <script>$('#draggybox').draggable();</script>
        </body>
        </html>

    What am I missing?!

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB:

        Clients <= HTTP = [ RESTlet <= HTTP = CouchDB ]

    I'm also using CouchDB to store user login data, because I don't want to add an additional database server just for that purpose. Thus, each user request to my service causes two CouchDB requests conducted by RESTlet (the auth lookup plus the "real" request). To keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea is to provide a cache (e.g. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes his password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently I save requested auth data in the cache and try to authenticate new requests against it; if authentication fails or no entry is available, I dispatch a GET request to my CouchDB storage to obtain the actual auth data. So in the worst case, users who have changed their credentials will perhaps still be able to log in with the old ones. How can I deal with that? And what is a good strategy to keep the cache(s) up to date in general? Thanks in advance.
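
    For the cache itself, a minimal LRU sketch built on LinkedHashMap, as mentioned in the question (wrap it with Collections.synchronizedMap, or use a concurrent cache library, since RESTlet will hit it from many threads):

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class LruCache<K, V> extends LinkedHashMap<K, V> {
            private final int maxEntries;

            public LruCache(int maxEntries) {
                super(16, 0.75f, true); // true = order by access, not insertion
                this.maxEntries = maxEntries;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries; // evict the least-recently-used entry
            }
        }

    Bounding the cache this way also caps how stale an entry can get under steady traffic, though it does not by itself solve cross-server invalidation.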

  • Is it legal to extend the Class class?

    - by spiralganglion
    I've been writing a system that generates some templates, and then generates some objects based on those templates. I had the idea that the templates could be extensions of the Class class, but that resulted in some magnificent errors:

        VerifyError: Error #1107: The ABC data is corrupt, attempt to read out of bounds.

    What I'm wondering is whether subclassing Class is even possible, and whether there is perhaps some case where doing this would be appropriate rather than just a gross misuse of OOP. I believe it should be possible, as ActionScript allows you to create variables of type Class. This use is described in the LiveDocs entry for Class, yet I've seen no mention of subclassing Class. Here's a pseudocode example:

        class Foo extends Class

        var a:Foo = new Foo();
        trace(a is Class)  // true, right?
        var b = new a();

    I have no idea what the type of b would be. In summary: can you subclass the Class class? If so, how can you do it without errors, and what type are the instances of the instances of the Class subclass?

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really what's going on? I'm trying to improve build times, and by using pre-compiled headers I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on.

    If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).

    EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.

    EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents, but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents: in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog-post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know that until we have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built a significant application on a document-oriented database.

  • Deterministic and non-uniform long string generation from a seed

    - by Limonup
    I had this weird idea for an encryption scheme that I wanted to try out. It may be bad, and it may have been done before, but I'm just doing it for fun.

    The short version of the question is: is it possible to generate a long, deterministic, and non-uniformly distributed string/sequence of numbers from a small seed?

    Long(er) version: I was thinking of encrypting a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform distribution. Then characters can have different bit lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter or remember a long text each time you want to decrypt. So I was wondering: is it possible to generate a text from a password seed? It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this?

    EDIT: To expand on the "strength" of varying bit lengths: if I have the string "test" (ASCII values 116, 101, 115, 116), its bit values are:

        1110100 1100101 1110011 1110100

    Then, say, my Huffman algorithm generates an encoding like:

        t = 101
        e = 1100111
        s = 10001

    The final string is 101 1100111 10001 101; if we decode this back as ASCII, we get 1011100 1111000 1101000, which is 3 entirely different characters. Obviously it's then impossible to perform any kind of frequency analysis or something like that on this.
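
    A minimal sketch in Python (standard library only): seeding random.Random makes the sequence reproducible, and drawing from a weighted pool makes the distribution non-uniform. The alphabet and weighting scheme are arbitrary choices for illustration:

        import random

        def pseudo_text(seed, length=10000):
            """Deterministic, non-uniformly distributed text from a seed."""
            rng = random.Random(seed)  # same seed -> same sequence
            alphabet = "etaoinshrdlucmfwypvbgkqjxz "
            # crude non-uniform weights: character i appears (len - i) times in the pool
            pool = "".join(ch * (len(alphabet) - i) for i, ch in enumerate(alphabet))
            return "".join(rng.choice(pool) for _ in range(length))

    One caveat worth noting: the generator's behavior is not fully guaranteed to be stable across Python versions, so an encryption keyed to this text would tie the ciphertext to the interpreter that produced it.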

  • Associate "Code/Properties/Stuff" with Fields in C# without reflection. I am too indoctrinated by Javascript

    - by AlexH
    I am building a library to automatically create forms for objects in the project I am working on. The codebase is in C#, and essentially we have a HUGE number of different objects to store information about different things. If I send these objects to the client side as JSON, it is easy enough to programmatically inspect them to generate a form for all of the properties. The problem is that I want a simple way of enforcing permissions and doing validation on the client side, on a field-by-field level.

    In Javascript I would do this by creating a parallel object structure, with some sort of { permissions : "someLevel", validator : someFunction } object at the nodes, and with empty nodes implying free permissions and universal validation. This would let me simply iterate over the new object and the permissions object, run the check, and deal with the result. Because I am over-familiar with the hammer that is Javascript, this is really the only way I can see to deal with this problem. My first implementation thus uses reflection to let me treat objects as dictionaries that can be programmatically iterated over, and then I just have dictionaries of dictionaries of PermissionRule objects to compare with. Very javascripty. Very awkward.

    Is there some better way to do this? Essentially, a way to associate a rule set with each property and then iterate over those properties. Or am I Doing It Wrong?
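
    The idiomatic C# analogue of the parallel-object trick is a custom attribute attached to each property; note that reading attributes still goes through reflection, it just moves the rule declaration to the field site. A sketch with made-up names:

        using System;

        [AttributeUsage(AttributeTargets.Property)]
        public class PermissionRuleAttribute : Attribute
        {
            public string Level { get; private set; }
            public PermissionRuleAttribute(string level) { Level = level; }
        }

        public class Customer
        {
            [PermissionRule("admin")]
            public decimal CreditLimit { get; set; }

            public string Name { get; set; } // no attribute: free permissions
        }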

  • Cannot run Python script on Windows with output redirected?

    - by Wai Yip Tung
    This is on Windows 7 (64-bit), Python 2.6 with the Win32 Extensions for Python. I have a simple script that just prints "hello world". I can launch it with python hello.py, and in that case I can redirect the output to a file. But if I run it by just typing hello.py on the command line and redirect the output, I get an exception:

        C:\> python hello.py
        hello world

        C:\> python hello.py >output
        C:\> type output
        hello world

        C:\> hello.py
        hello world

        C:\> hello.py >output
        close failed in file object destructor:
        Error in sys.excepthook:

        Original exception was:

    I think I first got this error after upgrading to Windows 7; I remember it working on XP. I have seen people talking about this bug: python-Bugs-1012692 | Can't pipe input to a python program. But that was a long time ago, and it does not mention any solution. Has anyone experienced this? Can anyone help?

  • Are finalizers ever allowed to call other managed classes' methods?

    - by romkyns
    I used to be pretty sure the answer was "no", as explained in "Overriding the Finalize method" and the Object.Finalize documentation. However, while randomly browsing through FileStream in Reflector, I found that it can actually call just such a method from a finalizer:

        private SafeFileHandle _handle;

        ~FileStream()
        {
            if (this._handle != null)
            {
                this.Dispose(false);
            }
        }

        protected override void Dispose(bool disposing)
        {
            try { ... }
            finally
            {
                if ((this._handle != null) && !this._handle.IsClosed) // <=== HERE
                {
                    this._handle.Dispose();                           // <=== AND HERE
                }
                [...]
            }
        }

    I started wondering whether this will always work due to the exact way in which it's written, and hence whether "do not touch managed classes from finalizers" is just a guideline that can be broken given a good reason and the necessary knowledge to do it right. I dug a bit deeper and found out that the worst that can happen when the "rule" is broken is that the managed object being accessed has already been finalized, or may be getting finalized in parallel on a separate thread. So if SafeFileHandle's finalizer doesn't do anything that would cause a subsequent call to Dispose to fail, then the above should be fine... right?

    Question: so there might after all be situations in which a method on another managed class can be called reliably from a finalizer? I've always believed this to be false, but this code suggests that it's possible and that there can be good enough reasons to do it.

    Bonus: Observe that the SafeFileHandle will not even know it's being called from a finalizer, since this is just a normal call to Dispose(). The base class, SafeHandle, actually has two private methods, InternalDispose and InternalFinalize, and in this case InternalDispose will be called. Isn't this a problem? Why not?...

  • High memory usage for dummies

    - by zaf
    I've just restarted my Firefox web browser again because it started stuttering and slowing down. This happens every other day due to (my understanding) excessive memory usage. I've noticed it takes 40M when it starts and then, by the time I notice the slowdown, it goes to 1G and my machine has nothing more to offer unless I close other applications. I'm trying to understand the technical reasons why this is such a difficult problem to solve.

    Mozilla have a page about high memory usage: http://support.mozilla.com/en-US/kb/High+memory+usage -- but I'm looking for a slightly more in-depth and satisfying explanation. Not super technical, but enough to give the issue more respect and please the crowd here. Some questions I'm already pondering (they could be silly, so take it easy):

    1. When I close all tabs, why doesn't the memory usage go all the way down?
    2. Why are there no limits on extension/theme/plugin memory usage?
    3. Why does the memory usage increase if the browser is left open for long periods of time?
    4. Why are memory leaks so difficult to find and fix?

    App- and language-agnostic answers are also much appreciated.
