Search Results

Search found 10417 results on 417 pages for 'large'.

Page 326/417 | < Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >

  • What are the limitations of the .NET Assembly format?

    - by McKAMEY
    We just ran into an interesting issue that I've not experienced before. We have a large-scale production ASP.NET 3.5 SP1 Web Application Project in Visual Studio 2008 SP1 which gets compiled and deployed using a Web Deployment Project. Everything has worked fine for the last year, until after a check-in yesterday the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special, and the errors are coming from areas of the app that weren't even changed. Using Reflector we inspected the offending methods and found garbage strings in the code (which Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines, so it does not appear to be hardware related. Further inspection showed that those garbage strings did not exist in the assemblies used as inputs to aspnet_merge.exe during deployment. The Web Deployment Project's "Output Assemblies" property offers these options:

    - Merge all outputs to a single assembly
    - Merge each individual folder output to its own assembly
    - Merge all pages and control outputs to a single assembly
    - Create a separate assembly for each page and control output

    In the Web Deployment Project properties, if we set the merge options to the first option ("Merge all outputs to a single assembly") we experience the issue, yet all of the other options work perfectly! So my question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any assembly format / aspnet_merge gurus know about any such limitations. Seems to me like a 25 MB assembly, while big, isn't outrageous. Less disk to hit if it is all pregen'd stuff.

    Read the article

  • How to preprocess text to do OCR error correction

    - by eaglefarm
    Here is what I'm trying to accomplish: I need to get several large text files off a computer that is not networked and has no output device except a printer. I tried printing the text, then scanning the printout with OCR to recover the text on another computer, but the OCR makes lots of errors (1 vs l, o vs 0, O vs D, etc.). To solve this I am thinking of writing a program to preprocess (annotate?) the text file before printing it, so that the errors can be corrected from the text output of the OCR program. For example, for 1 (number one) vs l (letter L), I could change the text by inserting \nnn after characters that are frequently misread in the OCR results, so "sample" becomes "sampl\108e". Then I can write another program to examine the file, look for \nnn, check the character before the \nnn (where nnn is the ASCII code in decimal), and fix it if necessary. Of course the program will have to recognize that the \nnn may contain errors too, but at least it knows that nnn consists of digits and can easily correct them. I think I would also add a CRC to each line so that any line that isn't corrected perfectly can be flagged as having a problem. Has anyone done anything like this? If there is an existing way of doing this I'd rather not reinvent the wheel. Any suggestions for an annotation format that would help solve this problem would be helpful too.
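
    A minimal sketch of the annotation pass described above, written in C# since the question doesn't name a language; the set of ambiguous characters and the simple per-line checksum (standing in for the CRC mentioned) are illustrative assumptions:

      using System;
      using System.IO;
      using System.Linq;
      using System.Text;

      class OcrAnnotator
      {
          // Characters OCR frequently confuses (assumed list, tune to your scanner/font).
          static readonly char[] Ambiguous = { '1', 'l', 'I', '0', 'o', 'O', 'D', '5', 'S' };

          static void Main(string[] args)
          {
              foreach (string line in File.ReadLines(args[0]))
              {
                  var annotated = new StringBuilder();
                  foreach (char c in line)
                  {
                      annotated.Append(c);
                      // Tag ambiguous characters with their decimal ASCII code, e.g. "l" becomes "l\108".
                      if (Ambiguous.Contains(c))
                          annotated.Append('\\').Append(((int)c).ToString("D3"));
                  }
                  // Simple per-line checksum so lines that don't reconstruct cleanly can be flagged.
                  int checksum = line.Aggregate(0, (acc, c) => (acc + c) % 997);
                  Console.WriteLine("{0} |CHK={1:D3}", annotated, checksum);
              }
          }
      }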

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.NET fat client with an IIS-hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types beyond the obvious. For example, today we serialize an array of ints as follows: [4 bytes (array length)][4 bytes][4 bytes]... Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: saving 7 bits per bool may seem like a waste of time, but when you are dealing with large magnitudes of data (which we are), it adds up very fast. Note: we want to avoid general compression algorithms because of the latency associated with them; Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency, whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
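
    For reference, a minimal sketch (my own illustration, not the poster's code) of the two layouts described above - the [4-byte length][4-byte element]... format for int arrays and packing 8 bools per byte:

      using System;

      static class PrimitiveSerializer
      {
          // [4 bytes length][4 bytes][4 bytes]... layout for an int array.
          public static byte[] Serialize(int[] values)
          {
              var buffer = new byte[4 + values.Length * 4];
              Buffer.BlockCopy(BitConverter.GetBytes(values.Length), 0, buffer, 0, 4);
              Buffer.BlockCopy(values, 0, buffer, 4, values.Length * 4);
              return buffer;
          }

          // 8 bools per byte instead of one byte per bool.
          public static byte[] Pack(bool[] values)
          {
              var packed = new byte[(values.Length + 7) / 8];
              for (int i = 0; i < values.Length; i++)
                  if (values[i])
                      packed[i / 8] |= (byte)(1 << (i % 8));
              return packed;
          }
      }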

    Read the article

  • Ideal Multi-Developer LAMP Stack?

    - by devians
    I would like to build an 'ideal' LAMP development stack:

    - Dual servers (virtualised, ESX): Apache / PHP on one, databases (MySQL, PgSQL, etc.) on the other.
    - User (developer) manageable mini environments, or instances.
    - Each developer instance shares the top-level config (available modules and default config, etc.).
    - A developer should have control over their Apache and PHP version for each project.
    - A developer might be able to change minor settings, e.g. magic quotes on for legacy code.
    - Each project would determine its database provider in its code.

    The idea is that it is one administrable server that I can control, providing globally configured things like APC, Memcached, XDebug etc. Then by moving into subsets for each project, I can allow my users to quickly control their environments for various projects. Essentially I'm proposing the typical system of a developer running their own stack on their own machine, but centralised. In this way I'd hope to avoid problems like cross-OS code problems, database inconsistencies, slightly different installs producing bugs, etc. I'm happy to manage this in custom builds from source, but if at all possible it would be great to have a large portion of it managed with some sort of package management. We typically use CentOS, so yum? Has anyone ever built anything like this before? Is there something turnkey that is similar to what I have described? Are there any useful guides I should be reading in order to build something like this?

    Read the article

  • BackgroundWorker From ASP.Net Application

    - by Kevin
    We have an ASP.NET application that lets administrators work with and perform operations on large sets of records. For example, we have a "Polish Data" task that an administrator can perform to clean up data for a record (e.g. reformat phone numbers, social security numbers, etc.). When performed on a small number of records, the task completes relatively quickly. However, when a user performs the task on a larger set of records, the task may take several minutes or longer to complete. So, we want to implement these kinds of tasks using some kind of asynchronous pattern. For example, we want to be able to launch the task and then use AJAX polling to provide a progress bar and status information. I have been looking into using the BackgroundWorker class, but I have read some things online that make me pause. I would love to get some additional advice on this. For example, I understand that the BackgroundWorker will actually use the thread pool from the current application. In my case, the application is an ASP.NET web site. I have read that this can be a problem because when the application recycles, the background workers will be terminated. Some of the jobs I mentioned above may take 3 minutes, but others may take a few hours. Also, we may have several hundred administrators all performing similar operations during the day. Will the ASP.NET application thread pool be able to handle all of these background jobs efficiently while still performing its normal request processing? So, I am trying to determine if using the BackgroundWorker class and approach is right for our needs. Should I be looking at an alternative approach? Thanks and sorry for such a long post! Kevin
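
    For context, a bare-bones sketch of the launch-then-poll pattern described above (hypothetical names and an in-memory tracker; this only illustrates the pattern, it is not a recommendation to run hour-long jobs on the ASP.NET worker process, which is exactly the concern raised):

      using System;
      using System.Collections.Concurrent;
      using System.Threading;

      static class PolishJobs
      {
          // jobId -> percent complete; an AJAX endpoint would read from this for the progress bar.
          static readonly ConcurrentDictionary<Guid, int> Progress = new ConcurrentDictionary<Guid, int>();

          public static Guid Start(int[] recordIds)
          {
              var jobId = Guid.NewGuid();
              Progress[jobId] = 0;
              ThreadPool.QueueUserWorkItem(_ =>
              {
                  for (int i = 0; i < recordIds.Length; i++)
                  {
                      PolishRecord(recordIds[i]);                          // the actual clean-up work
                      Progress[jobId] = (i + 1) * 100 / recordIds.Length;
                  }
              });
              return jobId;                                                // handed to the browser for polling
          }

          public static int GetProgress(Guid jobId)
          {
              int percent;
              return Progress.TryGetValue(jobId, out percent) ? percent : -1;
          }

          static void PolishRecord(int id) { /* reformat phone numbers, SSNs, etc. */ }
      }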

    Read the article

  • C# Setting Properties using Index

    - by Guazz
    I have a business class that contains many properties for various stock-exchange price types. This is a sample of the class:

      public class Prices
      {
          public decimal Today { get; set; }
          public decimal OneDay { get; set; }
          public decimal SixDay { get; set; }
          public decimal TenDay { get; set; }
          public decimal TwelveDay { get; set; }
          public decimal OneDayAdjusted { get; set; }
          public decimal SixDayAdjusted { get; set; }
          public decimal TenDayAdjusted { get; set; }
          public decimal OneHundredDayAdjusted { get; set; }
      }

    I have a legacy system that supplies the prices using string ids to identify the price type, e.g.:

      Today  = "0D"
      OneDay = "1D"
      SixDay = "6D"
      // ..., etc.

    Firstly, I load all the values into an IDictionary<string, decimal> collection so we have:

      [KEY] VALUE
      [0D] = 1.23456
      [1D] = 1.23456
      [6D] = 1.23456
      ...., etc.

    Secondly, I set the properties of the Prices class using a method that takes the above collection as a parameter, like so:

      SetPricesValues(IDictionary<string, decimal> pricesDictionary)
      {
          // TODAY'S PRICE
          string TODAY = "D0";
          if (true == pricesDictionary.ContainsKey(TODAY))
          {
              this.Today = pricesDictionary[TODAY];
          }

          // OneDay PRICE
          string ONE_DAY = "D1";
          if (true == pricesDictionary.ContainsKey(ONE_DAY))
          {
              this.OneDay = pricesDictionary[ONE_DAY];
          }

          // ..., ..., etc., for each other property
      }

    Is there a more elegant technique to set a large number of properties? Thanks, j
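
    One possible direction, sketched here as an assumption rather than taken from the post: keep a single map from legacy code to setter delegate so one loop replaces the per-property blocks (the class is marked partial only so the sketch stands alone; the "D0"/"D1" codes mirror the method above):

      using System;
      using System.Collections.Generic;

      public partial class Prices
      {
          public void SetPricesValues(IDictionary<string, decimal> pricesDictionary)
          {
              // Legacy code -> property setter; one entry per price type.
              var setters = new Dictionary<string, Action<decimal>>
              {
                  { "D0", v => Today = v },
                  { "D1", v => OneDay = v },
                  { "D6", v => SixDay = v },
                  // ... remaining codes
              };

              foreach (var entry in setters)
              {
                  decimal value;
                  if (pricesDictionary.TryGetValue(entry.Key, out value))
                      entry.Value(value);
              }
          }
      }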

    Read the article

  • Poor Ruby on Rails performance when using nested :include

    - by Jeremiah Peschka
    I have three models that look something like this:

      class Bucket < ActiveRecord::Base
        has_many :entries
      end

      class Entry < ActiveRecord::Base
        belongs_to :submission
        belongs_to :bucket
      end

      class Submission < ActiveRecord::Base
        has_many :entries
        belongs_to :user
      end

      class User < ActiveRecord::Base
        has_many :submissions
      end

    When I retrieve a collection of entries doing something like:

      @entries = Entry.find(:all,
                            :conditions => ['entries.bucket_id = ?', @bucket],
                            :include => :submission)

    the performance is pretty quick, although I get a large number of extra queries because the view uses the Submission.user object. However, if I add the user to the :include statement, the performance becomes terrible and it takes over a minute to return a total of 50 entries and submissions spread across 5 users. When I run the associated SQL commands, they complete in well under a second.

      @entries = Entry.find(:all,
                            :conditions => ['entries.bucket_id = ?', @bucket],
                            :include => {:submission => :user})

    Why would this second command have such terrible performance compared to the first?

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

    - The Data Layer would access the database.
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking about this design, I can see that most of the business rules may be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - Process all data at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - Questionable design solution; may encounter unknown problems.
    - Difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?

    Read the article

  • Business Layer Pattern on Rails? MVCL

    - by Fabiano PS
    This is a broad question, and I'd appreciate no short, dismissive answers like "oh, that's the model's job, this question is pointless."

    PROBLEM: Where I work, people spent two years building a system for managing the manufacturing process on demand, kept as simple yet broad as possible, involving selling, buying and assembly. The system is built on Ruby on Rails. It has been changed many times, and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: all bad.

    The QUESTION is: is there a gem, or a pattern, designed to handle the logic of a large Rails app? The logic would be able to talk fully to the models (whose only concern would be data format handling and validation).

    What I EXPECT is to move complexity out of the various controllers and hard-to-track callbacks into files responsible for handling the logic of a business operation. In some cases there is a need to wait for a response; in others, validating the input is enough and a background process takes over. For example, selling some products (where we need to wait for the operation to finish):

    1. Set up a view able to take the product input.
    2. The controller gets the product list entered by the employee and calls the logic:

       Logic::ExecuteWithResponse('sell', 'products',
                                  :prods    => @product_list_with_qtt,
                                  :when     => @date,
                                  :employee => current_user())

    This logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and others.

    Read the article

  • Create a unique ID by fuzzy matching of names (via agrep using R)

    - by tbrambor
    Using R, I am trying to match people's names in a dataset structured by year and city. Due to some spelling mistakes, exact matching is not possible, so I am trying to use agrep() to fuzzy match names. A sample chunk of the dataset is structured as follows:

      df <- data.frame(matrix(
        c("1200013","1200013","1200013","1200013","1200013","1200013","1200013","1200013",
          "1996","1996","1996","1996","2000","2000","2004","2004",
          "AGUSTINHO FORTUNATO FILHO","ANTONIO PEREIRA NETO","FERNANDO JOSE DA COSTA",
          "PAULO CEZAR FERREIRA DE ARAUJO","PAULO CESAR FERREIRA DE ARAUJO",
          "SEBASTIAO BOCALOM RODRIGUES","JOAO DE ALMEIDA","PAULO CESAR FERREIRA DE ARAUJO"),
        ncol=3, dimnames=list(seq(1:8), c("citycode","year","candidate"))
      ))

    The neat version:

        citycode  year  candidate
      1 1200013   1996  AGUSTINHO FORTUNATO FILHO
      2 1200013   1996  ANTONIO PEREIRA NETO
      3 1200013   1996  FERNANDO JOSE DA COSTA
      4 1200013   1996  PAULO CEZAR FERREIRA DE ARAUJO
      5 1200013   2000  PAULO CESAR FERREIRA DE ARAUJO
      6 1200013   2000  SEBASTIAO BOCALOM RODRIGUES
      7 1200013   2004  JOAO DE ALMEIDA
      8 1200013   2004  PAULO CESAR FERREIRA DE ARAUJO

    I'd like to check in each city separately whether there are candidates appearing in several years. E.g. in the example, PAULO CEZAR FERREIRA DE ARAUJO / PAULO CESAR FERREIRA DE ARAUJO appears twice (with a spelling mistake). Each candidate across the entire data set should be assigned a unique numeric candidate ID. The dataset is fairly large (5,500 cities, approx. 100K entries), so somewhat efficient coding would be helpful. Any suggestions as to how to implement this?

    Read the article

  • Finding distance to the closest point in a point cloud on an uniform grid

    - by erik
    I have a 3D grid of size AxBxC with equal distance, d, between the points in the grid. Given a number of points, what is the best way of finding the distance to the closest point for each grid point (every grid point should contain the distance to the closest point in the point cloud), given the assumptions below? Assume that A, B and C are quite big in relation to d, giving a grid of maybe 500x500x500, and that there will be around 1 million points. Also assume that if the distance to the nearest point exceeds a distance D, we do not care about the nearest point distance, and it can safely be set to some large number (D is maybe 2 to 10 times d). Since there will be a great number of grid points and points to search from, a simple exhaustive search:

      for each grid point:
          for each point:
              if distance between points < minDistance:
                  minDistance = distance between points

    is not a good alternative. I was thinking of doing something along the lines of:

      create a container of size A*B*C where each element holds a container of points
      for each point:
          define indexX = round((point position x - grid min position x)/d)   // same for y and z
          add the point to the correct index of the container
      for each grid point:
          search the container of that grid point and find the closest point
          if no points in container and D > 0.5d:
              search the 26 container indices nearest to the grid point for a closest point
              ... continue with the next layer until a point is found or the distance to that layer is greater than D

    Basically: put the points in buckets and do a radial search outwards until a point is found for each grid point. Is this a good way of solving the problem, or are there better/faster ways? A solution which is good for parallelisation is preferred.
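
    A rough sketch of the bucket-and-expand idea above (C# chosen arbitrarily; the grid origin at (0,0,0), the cell size of d, and the shell cut-off test are simplifying assumptions, and the termination bound is only approximate):

      using System;
      using System.Collections.Generic;

      class GridDistanceSketch
      {
          // For every grid point (ix*d, iy*d, iz*d), find the distance to the nearest cloud point,
          // or return `far` when no point lies within maxDist.
          public static double[,,] Compute(int A, int B, int C, double d,
                                           IEnumerable<(double x, double y, double z)> points,
                                           double maxDist, double far)
          {
              // 1. Bucket points into cells of side d, keyed by the nearest grid index.
              var buckets = new Dictionary<(int, int, int), List<(double x, double y, double z)>>();
              foreach (var p in points)
              {
                  var key = ((int)Math.Round(p.x / d), (int)Math.Round(p.y / d), (int)Math.Round(p.z / d));
                  if (!buckets.TryGetValue(key, out var cell))
                      buckets[key] = cell = new List<(double x, double y, double z)>();
                  cell.Add(p);
              }

              // 2. For each grid point, search outwards shell by shell (1 cell, then the 26 neighbours, ...)
              //    until something is found or the shell lies beyond maxDist.
              var result = new double[A, B, C];
              int maxShell = (int)Math.Ceiling(maxDist / d) + 1;
              for (int ix = 0; ix < A; ix++)
              for (int iy = 0; iy < B; iy++)
              for (int iz = 0; iz < C; iz++)
              {
                  double best = far;
                  for (int s = 0; s <= maxShell && best > (s - 1) * d; s++)
                      for (int dx = -s; dx <= s; dx++)
                      for (int dy = -s; dy <= s; dy++)
                      for (int dz = -s; dz <= s; dz++)
                      {
                          // Visit only cells on the surface of shell s (inner shells were already searched).
                          if (Math.Max(Math.Abs(dx), Math.Max(Math.Abs(dy), Math.Abs(dz))) != s) continue;
                          if (!buckets.TryGetValue((ix + dx, iy + dy, iz + dz), out var cell)) continue;
                          foreach (var (px, py, pz) in cell)
                          {
                              double dist = Math.Sqrt((px - ix * d) * (px - ix * d)
                                                    + (py - iy * d) * (py - iy * d)
                                                    + (pz - iz * d) * (pz - iz * d));
                              if (dist < best) best = dist;
                          }
                      }
                  result[ix, iy, iz] = best <= maxDist ? best : far;
              }
              return result;
          }
      }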

    Read the article

  • Search for string allowing for one mismatches in any location of the string, Python

    - by Vincent
    I am working with DNA sequences of length 25 (see the example below). I have a list of 230,000 of them and need to look for each sequence in the entire genome (of the Toxoplasma gondii parasite). I am not sure how large the genome is, but it is much more than 230,000 sequences. I need to look for each of my sequences of 25 characters, for example AGCCTCCCATGATTGAACAGATCAT. The genome is formatted as a continuous string, i.e. CATGGGAGGCTTGCGGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTTGCGGAGTGCGGAGCCTGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTT... I don't care where or how many times a sequence is found, just yes or no. That part is simple, I think: str.find(AGCCTCCCATGATTGAACAGATCAT). But I also want to find a close match, defined as wrong (mismatched) at any location, but only one location, and record the location in the sequence. I am not sure how to do this. The only thing I can think of is using a wildcard and performing the search with the wildcard in each position, i.e. searching 25 times. For example:

      AGCCTCCCATGATTGAACAGATCAT
      AGCCTCCCATGATAGAACAGATCAT

    is a close match with a mismatch at position 13. Speed is not a big issue because I am only doing this 3 times (I hope), but it would be nice if it was fast. There are programs that find matches and partial matches, but I am looking for a type of partial match that is not available with these applications. Here is a similar post for Perl, although they are only comparing sequences, not searching a continuous string: Related post

    Read the article

  • What programming languages do the top tier Universities teach?

    - by Simucal
    I'm constantly being inundated with articles and people talking about how most of today's universities are nothing more than Java vocational schools churning out mediocre programmer after mediocre programmer. Our very own Joel Spolsky has his famous article, "The Perils of Java Schools." Similarly, Alan Kay, a famous computer scientist (and SO member), has said this in the past: "I fear — as far as I can tell — that most undergraduate degrees in computer science these days are basically Java vocational training." - Alan Kay (link) If the languages taught by the schools are considered such a contributing factor to the quality of a school's program, then I'm curious: what languages do the "top-tier" computer science schools teach (MIT, Carnegie Mellon, Stanford, etc.)? If the average school is performing so poorly due in large part to the languages (or lack thereof) that it teaches, then what languages do the supposedly "good" CS programs teach that differentiate them? If you can, provide the name of the school you attended, followed by a list of the languages they use throughout their coursework. Edit: Shog-9 asks why I don't get this information directly from the schools' websites themselves. I would, but many schools' websites don't discuss the languages they use in their class descriptions. Quite a few will say, "using high-level languages we will...", without elaborating on which languages they use. So, we should be able to get a pretty accurate list of languages taught at various well-known institutions from the various SO members who have attended them.

    Read the article

  • Amazon S3 and swfaddress

    - by justinbach
    I recently migrated a large AS3 site (lots of swfs, lots of flvs) to Amazon S3. Pretty much everything but HTML and JS files is being stored/served from Amazon, and it's working well. The only problem I'm having is that I built the site using SWFaddress (actually, via the Gaia framework which uses SWFaddress), and for some reason, SWFaddress is no longer updating the address bar correctly as users navigate from page to page. In other words, the URL persistently remains http://www.mysite.com, not http://www.mysite.com/#/section as would be the case were SWFaddress functioning correctly (and as it was functioning prior to the migration). Stranger yet, if I go to (e.g.) http://www.mysite.com/#/section directly, the deeplinking functions as you'd expect--I arrive directly at the correct section. However, navigating away from that section doesn't have any effect on the address bar, despite the fact that it should be dynamically updated. I've got a crossdomain.xml file set up on the site that allows access from all domains, so that's not the issue, and I don't know what else might be. Any ideas would be greatly appreciated! P.S. I integrated S3 by putting pretty much the entire site in an S3 bucket and then just changing the initial swfobject embed to point to the S3 instance of main.swf, passing in the S3 path as the "base" param to the embedded swf so that all dynamically loaded assets and swfs would also be sourced from s3. Dunno if that's related to the troubles I'm having.

    Read the article

  • What do I need to do to make a WPF Browser Application (XBAP) that requires Full Trust work on Windows 7?

    - by Benoit J. Girard
    So this is a Visual Studio 2008, .NET, WPF, XBAP, Windows 7 question, regarding .NET trust policies. At work, we have several Web Browser Applications (.XBAP files) developed with Visual Studio 2008 (so .NET 3.5) that we deployed internally. These required a .NET FullTrust policy, we found a way to make a .MSI that adjusted the policy on individual stations, everything worked great. Users love in-browser apps. This was last year and on Windows XP. This year our company started upgrading users to Windows 7, and now none of our Web Browser Applications work. The error message is "Trust Not Granted", as if the policy-changing .MSI had not been run. Other details: I can confirm that our apps work on Windows XP for Internet Explorer 7 and Firefox, and do not work on Windows 7 for Internet Explorer 8 nor Firefox. I must admit that .NET security policies mystify me. Still, I could not find any mention of this problem on the Net at large or on this site. Did anybody else encounter this problem? Any and all help welcome.

    Read the article

  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com

    I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs & whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?

    Read the article

  • Product Catalog Schema design

    - by FlySwat
    I'm building a proof-of-concept schema for a product catalog to possibly replace a very aging and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges). The current catalog schema has each distinct category broken out into individual tables; while this is nicely normalized and performs well, it is fairly difficult to extend. Adding a new attribute to a particular product involves changing the table schema and backpopulating old data. An idea I've been toying with has been something along the lines of a base set of entity tables in 3rd normal form; these will contain the facts that are common among ALL products. Then, I'd like to build an Attribute-Entity-Value schema that allows each entity type to be extended in a flexible way using just data and no schema changes. Finally, I'd like to denormalize this data model into materialized views for each individual entity type. These views are what the application would access. We also have many tables that contain business rules and compatibility rules. These would join against the base entity tables instead of the views. My big concerns here are:

    - Performance: Attribute-Entity-Value schemas are flexible, but typically perform poorly. Should I be concerned?
    - More performance: denormalizing using materialized views may have some risks; I'm not positive on this yet.
    - Complexity: while this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult.

    For those who have designed product catalogs for large-scale enterprises, am I going down the totally wrong path? Is there any good best-practice reading available on schema design for product catalogs?

    Read the article

  • Autodetect Presence of CSV Headers in a File

    - by banzaimonkey
    Short question: how do I automatically detect whether a CSV file has headers in the first row?

    Details: I've written a small CSV parsing engine that places the data into an object that I can access as (approximately) an in-memory database. The original code was written to parse third-party CSV with a predictable format, but I'd like to be able to use this code more generally. I'm trying to figure out a reliable way to automatically detect the presence of CSV headers, so the script can decide whether to use the first row of the CSV file as keys / column names or start parsing data immediately. Since all I need is a boolean test, I could easily specify an argument after inspecting the CSV file myself, but I'd rather not have to (go go automation). I imagine I'd have to parse the first 3 to ? rows of the CSV file and look for a pattern of some sort to compare against the headers. I'm having nightmares of three particularly bad cases in which:

    - the headers include numeric data for some reason;
    - the first few rows (or large portions of the CSV) are null;
    - the headers and data look too similar to tell them apart.

    If I can get a "best guess" and have the parser fail with an error or spit out a warning if it can't decide, that's OK. If this is something that's going to be tremendously expensive in terms of time or computation (and take more time than it's supposed to save me) I'll happily scrap the idea and go back to working on "important things". I'm working with PHP, but this strikes me as more of an algorithmic / computational question than something that's implementation-specific. If there's a simple algorithm I can use, great. If you can point me to some relevant theory / discussion, that'd be great, too. If there's a giant library that does natural language processing or 300 different kinds of parsing, I'm not interested.
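
    One common heuristic, sketched here in C# rather than the asker's PHP (the numeric-vs-text test and the 20-row sample size are assumptions): treat the first row as a header when its cells are mostly non-numeric while the same columns in the following rows are mostly numeric.

      using System;
      using System.Linq;

      static class CsvHeaderGuess
      {
          // rows: already-split CSV rows, e.g. produced by a naive line.Split(',').
          // Returns true when the first row looks like a header row.
          public static bool FirstRowLooksLikeHeader(string[][] rows, int sampleSize = 20)
          {
              if (rows.Length < 2) return false;                      // not enough data to decide
              var first = rows[0];
              var sample = rows.Skip(1).Take(sampleSize).ToArray();

              int votesForHeader = 0, votesAgainst = 0;
              for (int col = 0; col < first.Length; col++)
              {
                  bool headerIsNumeric = double.TryParse(first[col], out _);
                  var dataCells = sample.Where(r => col < r.Length && r[col] != "").ToArray();
                  if (dataCells.Length == 0) continue;                // mostly-null column: no vote
                  bool dataIsNumeric = dataCells.Count(r => double.TryParse(r[col], out _)) > dataCells.Length / 2;

                  // A text cell sitting on top of a numeric column is strong evidence of a header.
                  if (!headerIsNumeric && dataIsNumeric) votesForHeader++;
                  else if (headerIsNumeric == dataIsNumeric) votesAgainst++;
              }
              return votesForHeader > votesAgainst;                   // "best guess" only
          }
      }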

    Read the article

  • Invalidating Memcached Keys on save() in Django

    - by Zack
    I've got a view in Django that uses memcached to cache data for the more highly trafficked views that rely on a relatively static set of data. The key word is relatively: I need to invalidate the memcached key for that particular URL's data when it's changed in the database. To be as clear as possible, here's the meat an' potatoes of the view (Person is a model, cache is django.core.cache.cache):

      def person_detail(request, slug):
          if request.is_ajax():
              cache_key = "%s_ABOUT_%s" % (settings.SITE_PREFIX, slug)
              # Check the cache to see if we've already got this result made.
              json_dict = cache.get(cache_key)
              # Was it a cache hit?
              if json_dict is None:
                  # That's a negative Ghost Rider
                  person = get_object_or_404(Person, display = True, slug = slug)
                  json_dict = {
                      'name' : person.name,
                      'bio' : person.bio_html,
                      'image' : person.image.extra_thumbnails['large'].absolute_url,
                  }
                  cache.set(cache_key, json_dict)
              # json_dict will now exist, whether it's from the cache or not
              response = HttpResponse()
              response['Content-Type'] = 'text/javascript'
              # Make sure it's all properly formatted for JS by using simplejson
              response.write(simplejson.dumps(json_dict))
              return response
          else:
              # This is where the fully templated response is generated

    What I want to do is get at that cache_key variable in its "unformatted" form, but I'm not sure how to do this -- if it can be done at all. Just in case there's already something to do this, here's what I want to do with it (this is from the Person model's hypothetical save method):

      def save(self):
          # If this is an update, the key will be cached, otherwise it won't; let's see if we can't find me
          try:
              old_self = Person.objects.get(pk=self.id)
              cache_key = # Voodoo magic to get that variable
              old_key = cache_key.format(settings.SITE_PREFIX, old_self.slug)  # Generate the key currently cached
              cache.delete(old_key)  # Hit it with both barrels of rock salt
          # Turns out this doesn't already exist; let's make that first request even faster by making this cache right now
          except DoesNotExist:
              # I haven't gotten to this yet.
          super(Person, self).save()

    I'm thinking about making a view class for this sorta stuff, and having functions in it like remove_cache or generate_cache since I do this sorta stuff a lot. Would that be a better idea? If so, how would I call the views in the URLconf if they're in a class?

    Read the article

  • iPhone UIView Animation Disables UIButton Subview

    - by bensnider
    So I've got a problem with buttons and animations. Basically, I'm animating a view using the UIView animations while also trying to listen for taps on the button inside the view. The view is just as large as the button, and the view is actually a subclass of UIImageView with an image below the button. The view is a subview of a container view placed in Interface Builder with user interaction enabled and clipping enabled. All the animation and button handling is done in this UIImageView subclass, while the startFloating message is sent from a separate class as needed. If I do no animation, the buttonTapped: message gets sent correctly, but during the animation it does not get sent. I've also tried implementing the touchesEnded method, and the same behavior occurs.

    UIImageView subclass init (I have the button filled with a color so I can see the frame gets set properly, which it does):

      - (id)initWithImage:(UIImage *)image
      {
          self = [super initWithImage:image];
          if (self != nil) {
              // ...stuffs
              UIButton *tapBtn = [UIButton buttonWithType:UIButtonTypeCustom];
              tapBtn.frame = CGRectMake(0, 0, self.frame.size.width, self.frame.size.height);
              [tapBtn addTarget:self action:@selector(buttonTapped:) forControlEvents:UIControlEventTouchUpInside];
              tapBtn.backgroundColor = [UIColor cyanColor];
              [self addSubview:tapBtn];
              self.userInteractionEnabled = YES;
          }
          return self;
      }

    Animation method that starts the animation (if I don't call this the button works correctly):

      - (void)startFloating
      {
          [UIView beginAnimations:@"floating" context:nil];
          [UIView setAnimationDelegate:self];
          [UIView setAnimationCurve:UIViewAnimationCurveLinear];
          [UIView setAnimationDuration:10.0f];
          self.frame = CGRectMake(self.frame.origin.x, -self.frame.size.height,
                                  self.frame.size.width, self.frame.size.height);
          [UIView commitAnimations];
      }

    So, to be clear:

    - Using the UIView animation effectively disables the button.
    - Disabling the animation causes the button to work.
    - The button is correctly sized and positioned on screen, and moves along with the view correctly.

    Read the article

  • Best Practice for Uploading Many (2000+) Images to A Server

    - by bob
    Hello, I have a general question about this. When you have a gallery, sometimes people need to upload thousands of images at once. Most likely, it would be done through a .zip file. What is the best way to go about uploading this sort of thing to a server? Many times, servers have timeouts etc. that need to be accounted for. I am wondering what kinds of things I should be looking out for and what is the best way to handle a large number of images being uploaded. I'm guessing that you would allow a user to upload a zip file (assuming the timeout does not affect you), and this zip file is uploaded to a specific directory; let's assume in this case a directory is created for each user in the system. You would then unzip the directory on the server and scan the user's folder for any directories containing .jpg or .png or .gif files (etc.) and then import them into a table accordingly, I'm guessing labeled by folder name. What kind of server-side troubles could I run into? I'm aware that there may be many issues. Even general ideas would be good so I can then research further. Thanks! Also, I would be programming in Ruby on Rails, but I think this question applies across any language.

    Read the article

  • Determining which form input failed validation?

    - by Alastair Pitts
    I am designing a creation wizard in ASP.NET MVC 1 and, instead of posting back each step, I'm using javascript to toggle the display of the different steps' divs. This is a quick sample of the code, just to explain:

      <% using (Html.BeginForm()) {%>
        <fieldset>
          <legend>Fields</legend>
          <div id="wizardStep1">
            <% Html.RenderPartial("CreateStep1", Model); %>
          </div>
          <div id="wizardStep2">
            <% Html.RenderPartial("CreateStep2", Model); %>
          </div>
          <div id="wizardStep3">
            <% Html.RenderPartial("CreateStep3", Model); %>
          </div>
        </fieldset>
      <% } %>

    I have javascript that just toggles the visibility of the divs, with each partial view containing a different section of the input form (which is pretty large by itself). My question is: if the form fails validation and I reload the page with the validation errors, is there a way for me to determine which div contains the error? Either in javascript or otherwise? Failing that, is there a good client-side validation library for MVC 1? Ideally I'd love to move to MVC 2 and the client-side validation built into that, but I am required to use MVC 1.

    Read the article

  • Locking issues with replacing files on a website

    - by Moe Sisko
    I want to replace existing files on an IIS website with updated versions. Say these files are large PDF documents which can be accessed via hyperlinks. The site is up 24x7, so I'm concerned about locking issues when a file is being updated at exactly the same time that someone is trying to read the file. The files are updated using C# code run on the server. I can think of two options for opening the file for writing.

    Option 1) Open the file for writing using FileShare.Read:

      using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the document opens up as a blank page.

    Option 2) Open the file for writing using FileShare.None:

      using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the browser shows an error. In IE 8 you get HTTP 500, "The website cannot display the page", and in Firefox 3.5 you get: "The process cannot access the file because it is being used by another process."

    The browser behaviour kind of makes sense, and both options seem reasonable. I guess it's highly unlikely that a user will attempt to read a file at exactly the same time you are updating it. It would be nice if, somehow, the file update was atomic, like updating a database with SQL wrapped in a transaction. I'm wondering if you guys worry about this sort of thing, and prefer either of the above options, or even have other options of your own for updating files.
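
    A third option worth sketching (my own assumption, not from the post): write the new version to a temporary file in the same directory and swap it in with File.Replace, so readers only ever see a complete old or complete new file:

      using System.IO;

      static class AtomicPublish
      {
          // Writes newContent next to the live file, then swaps it in.
          // Note: File.Replace requires both paths to be on the same volume.
          public static void Publish(string livePath, byte[] newContent)
          {
              string tempPath = livePath + ".tmp";
              string backupPath = livePath + ".bak";

              File.WriteAllBytes(tempPath, newContent);          // the slow part happens off to the side

              if (File.Exists(livePath))
                  File.Replace(tempPath, livePath, backupPath);  // near-instant swap, old copy kept as .bak
              else
                  File.Move(tempPath, livePath);                 // first-time publish
          }
      }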

    Read the article

  • Threadsafe binding with DispatcherObject.CheckAccess()

    - by maffe
    Hi, according to this, I can achieve thread safety with large overhead. I wrote the following class and use it. It works fine.

      public abstract class BindingBase : DispatcherObject, INotifyPropertyChanged, INotifyPropertyChanging
      {
          private string _displayName;
          private const string NameDisplayName = "DisplayName";

          /// <summary>
          /// The display name for the gui element which bound this instance. It can be used for localization.
          /// </summary>
          public string DisplayName
          {
              get { return _displayName; }
              set
              {
                  NotifyPropertyChanging(NameDisplayName);
                  _displayName = value;
                  NotifyPropertyChanged(NameDisplayName);
              }
          }

          protected BindingBase() {}

          protected BindingBase(string displayName)
          {
              DisplayName = displayName;
          }

          public event PropertyChangedEventHandler PropertyChanged;
          public event PropertyChangingEventHandler PropertyChanging;

          protected void NotifyPropertyChanged(string name)
          {
              if (PropertyChanged == null) return;
              if (CheckAccess())
                  PropertyChanged.Invoke(this, new PropertyChangedEventArgs(name));
              else
                  Dispatcher.BeginInvoke(DispatcherPriority.Normal, (Action) (() => NotifyPropertyChanged(name)));
          }

          protected void NotifyPropertyChanging(string name)
          {
              if (PropertyChanging == null) return;
              if (CheckAccess())
                  PropertyChanging.Invoke(this, new PropertyChangingEventArgs(name));
              else
                  Dispatcher.BeginInvoke(DispatcherPriority.Normal, (Action) (() => NotifyPropertyChanging(name)));
          }
      }

    So is there a reason why I've never found something like that? Are there any issues I should be aware of? Regards

    Read the article

  • jQuery image preload/cache halting browser

    - by Nathan Loding
    In short, I have a very large photo gallery and I'm trying to cache as many of the thumbnail images as I can when the first page loads. There could be 1000+ thumbnails.

    First question: is it stupid to try to preload/cache that many?

    Second question: when the preload() function fires, the entire browser stops responding for a minute or two, at which time the callback fires, so the preload is complete. Is there a way to accomplish "smart preloading" that doesn't impede the user experience/speed when attempting to load this many objects?

    The $.preLoadImages function is taken from here: http://binarykitten.me.uk/dev/jq-plugins/107-jquery-image-preloader-plus-callbacks.html

    Here's how I'm implementing it:

      $(document).ready(function() {
          setTimeout("preload()", 5000);
      });

      function preload() {
          var images = ['image1.jpg', ... 'image1000.jpg'];
          $.preLoadImages(images, function() {
              alert('done');
          });
      }

    1000 images is a lot. Am I asking too much?

    Read the article
