Search Results

Search found 91480 results on 3660 pages for 'large data in sharepoint list'.


  • NSKeyedArchiver on NSArray has large size overhead

    - by redguy
    I'm using NSKeyedArchiver in a Mac OS X program which generates data for an iPhone application. I found out that, by default, the resulting archives are much bigger than I expected. Example:

        NSMutableArray *ar = [NSMutableArray arrayWithCapacity:10];
        for (int i = 0; i < 100000; i++) {
            NSString *s = [NSString stringWithFormat:@"item%06d", i];
            [ar addObject:s];
        }
        [NSKeyedArchiver archiveRootObject:ar toFile:@"NSKeyedArchiver.test"];

    This stores 10 * 100000 = 1M bytes of useful data, yet the resulting file is almost three megabytes. The overhead seems to grow with the number of items in the array; for 1000 items, the file was about 22k. "file" reports that it is an "Apple binary property list" (not the XML format). Is there a simple way to prevent this huge overhead? I wanted to use NSKeyedArchiver for the simplicity it provides. I could write the data to my own, non-generic binary format, but that's not very elegant. Aggregating the data into large chunks and feeding those to NSKeyedArchiver should also work, but that rather defeats the point of using a simple, ready-to-use archiver. Am I missing some method call or usage pattern that would reduce this overhead?

    Read the article
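
    The "aggregate into large chunks" workaround mentioned in the question above can be sketched in Python: pickle is not NSKeyedArchiver and its per-record overhead is far smaller, but the shape of the idea - pack many small records into one delimited blob before handing it to a generic archiver - is the same. Treat this as an illustration of the concept, not of the Cocoa API:

        import pickle

        # 100,000 short records, mirroring the item strings in the question.
        records = ["item%06d" % i for i in range(100000)]

        # Per-record serialization: every element carries its own framing bytes.
        per_item = pickle.dumps(records)

        # Aggregated: join the records into one delimited blob, then serialize that.
        packed = pickle.dumps("\n".join(records))

        print(len(per_item), len(packed))  # the packed form drops the per-element framing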

  • Javascript BBCode Parser recognizes only first list element

    - by nolandark
    I have a really simple JavaScript BBCode parser for client-side live preview (I don't want to use Ajax for that). The problem is that this parser only recognizes the first list element:

        function bbcode_parser(str) {
            search = new Array(
                /\[b\](.*?)\[\/b\]/,
                /\[i\](.*?)\[\/i\]/,
                /\[img\](.*?)\[\/img\]/,
                /\[url\="?(.*?)"?\](.*?)\[\/url\]/,
                /\[quote](.*?)\[\/quote\]/,
                /\[list\=(.*?)\](.*?)\[\/list\]/i,
                /\[list\]([\s\S]*?)\[\/list\]/i,
                /\[\*\]\s?(.*?)\n/);
            replace = new Array(
                "<strong>$1</strong>",
                "<em>$1</em>",
                "<img src=\"$1\" alt=\"An image\">",
                "<a href=\"$1\">$2</a>",
                "<blockquote>$1</blockquote>",
                "<ol>$2</ol>",
                "<ul>$1</ul>",
                "<li>$1</li>");
            for (i = 0; i < search.length; i++) {
                str = str.replace(search[i], replace[i]);
            }
            return str;
        }

    Given this input:

        [list]
        [*] adfasdfdf
        [*] asdfadsf
        [*] asdfadss
        [/list]

    only the first element is converted to an HTML list item; the rest stays as BBCode: adfasdfdf [*] asdfadsf [*] asdfadss. I tried playing around with "\s", "\S" and "\n", but I'm mostly used to PHP regexes and totally new to JavaScript regexes. Any suggestions?

    Read the article
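
    The behaviour described above is what JavaScript's String.replace does when the regular expression lacks the g flag: only the first match is replaced, so the [*] pattern stops after one item. A minimal sketch of the same list conversion in Python, whose re.sub replaces every match by default:

        import re

        def bbcode_lists_to_html(text):
            # Convert every "[*] item" line to <li>...</li>; re.sub replaces all matches.
            text = re.sub(r"\[\*\]\s?(.*)", r"<li>\1</li>", text)
            # Wrap the whole [list]...[/list] block in <ul> tags.
            text = re.sub(r"\[list\]([\s\S]*?)\[/list\]", r"<ul>\1</ul>", text, flags=re.I)
            return text

        sample = "[list]\n[*] first\n[*] second\n[*] third\n[/list]"
        print(bbcode_lists_to_html(sample))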

  • How to protect copyright on large web applications?

    - by Saif Bechan
    Recently I read about Myows, described as "the universal copyright management and protection app for smart creatives". It is used to protect copyright and more. Currently I am working on a large web application which is in its late testing phase. Because of the complexity of the app there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copyable. I was glad to see that there is a company out there that provides such services, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so new. Is protecting copyright a good idea for a large web application? If so, do you think Myows will be worth using, or are there better ways to achieve that?

    Update: Wow, there is no better person to have answered this question than the creator himself. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will look further into the subject, start uploading my code, and see how it works out. I will leave this question up because I do want some more suggestions on this topic.

    Read the article

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob. Now, those fields are at most 4 kB (but the average is 2-3 kB). The PostgreSQL documentation mentions that LOs are good when the fields are big, but I didn't see what 'big' meant. I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I was forced to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug brought forward a potential compatibility issue, and I eventually found out that Large Objects are a pain to deal with compared to a normal field. So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded as hex, so there is some overhead in encoding and decoding, and this would hurt performance. Are there good benchmarks of the performance of both of these? Has anybody made the switch and seen a difference?

    Read the article
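
    A rough way to answer the hex-encoding-overhead part of the question above is simply to time round-trips on data of your own sizes. A minimal Python sketch, assuming a reachable PostgreSQL instance; psycopg2 is used purely as a convenient driver (the question itself is about Hibernate), and the DSN and table name are made up for illustration:

        import os
        import time
        import psycopg2

        conn = psycopg2.connect("dbname=benchdb")   # hypothetical DSN
        cur = conn.cursor()
        cur.execute("CREATE TABLE IF NOT EXISTS blob_bench (id serial PRIMARY KEY, data bytea)")

        payload = os.urandom(3 * 1024)              # ~3 kB, the average size in the question

        start = time.time()
        for _ in range(1000):
            cur.execute("INSERT INTO blob_bench (data) VALUES (%s)", (psycopg2.Binary(payload),))
        conn.commit()
        print("1000 bytea inserts: %.3fs" % (time.time() - start))

        start = time.time()
        cur.execute("SELECT data FROM blob_bench")
        rows = cur.fetchall()
        print("read %d rows back: %.3fs" % (len(rows), time.time() - start))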

  • Looking for advice on importing large dataset in sqlite and Cocoa/Objective-C

    - by jluckyiv
    I have a fairly large hierarchical dataset I'm importing. The total size of the database after import is about 270MB in sqlite. My current method works, but I know I'm hogging memory as I do it. For instance, if I run with Zombies, my system freezes up (although it will execute just fine if I don't use that instrument). I was hoping for some algorithm advice. I have three hierarchical tables comprising about 400,000 records. The highest level has about 30 records, the next has about 20,000, and the last has the balance. Right now, I'm using nested for loops to import. I know I'm creating an unreasonably large object graph, but I'm also looking to serialize to JSON or XML, because I want to break up the records into downloadable chunks for the end user to import a la carte. I have the code written to do the serialization, but I'm wondering if I can serialize the object graph if I only have pieces of it in memory. Here's pseudocode showing the basic process for the sqlite import (unnecessary detail left out):

        [database open];
        [database beginTransaction];
        NSArray *firstLevels = [[FirstLevel fetchFromURL:url] retain];
        for (FirstLevel *firstLevel in firstLevels)
        {
            [firstLevel save];
            int id1 = [firstLevel primaryKey];
            NSArray *secondLevels = [[SecondLevel fetchFromURL:url] retain];
            for (SecondLevel *secondLevel in secondLevels)
            {
                [secondLevel saveWithForeignKey:id1];
                int id2 = [secondLevel primaryKey];
                NSArray *thirdLevels = [[ThirdLevel fetchFromURL:url] retain];
                for (ThirdLevel *thirdLevel in thirdLevels)
                {
                    [thirdLevel saveWithForeignKey:id2];
                }
                [database commit];
                [database beginTransaction];
                [thirdLevels release];
            }
            [secondLevels release];
        }
        [database commit];
        [database release];
        [firstLevels release];

    Read the article
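
    The memory pressure in the pattern above usually comes from holding every fetched level in memory at once rather than from sqlite itself. A minimal sketch of the "fetch a batch, insert it, commit, let it go" approach, written in Python with the standard sqlite3 module rather than the Objective-C database wrapper in the question; the table name and the fetch_batch helper are invented for illustration:

        import sqlite3

        def fetch_batch(level, parent_id, offset, limit):
            """Hypothetical stand-in for the remote fetch; returns at most `limit` rows."""
            return []  # replace with the real network call

        conn = sqlite3.connect("import.db")
        conn.execute("CREATE TABLE IF NOT EXISTS third_level "
                     "(id INTEGER PRIMARY KEY, parent INTEGER, payload TEXT)")

        BATCH = 1000
        parent_ids = [1, 2, 3]          # stand-in for the second-level keys
        for parent in parent_ids:
            offset = 0
            while True:
                rows = fetch_batch("third", parent, offset, BATCH)
                if not rows:
                    break
                conn.executemany(
                    "INSERT INTO third_level (parent, payload) VALUES (?, ?)",
                    [(parent, payload) for payload in rows])
                conn.commit()           # flush the batch; nothing older stays in memory
                offset += BATCH

        conn.close()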

  • Discover periodic patterns in a large data-set

    - by Miner
    I have a large sequence of tuples on disk in the form

        (t1, k1) (t2, k2) ... (tn, kn)

    where ti is a monotonically increasing timestamp and ki is a key (assume a fixed-length string if needed). Neither ti nor ki is guaranteed to be unique, and the number of unique tis and kis is huge (millions). n itself is very large (100 million+) and the size of k (approx. 500 bytes) makes it impossible to store everything in memory. I would like to find periodic occurrences of keys in this sequence. For example, given the sequence

        (1, a) (2, b) (3, c) (4, b) (5, a) (6, b) (7, d) (8, b) (9, a) (10, b)

    the algorithm should emit (a, 4) and (b, 2); that is, a occurs with a period of 4 and b occurs with a period of 2. If I build a hash of all keys and store the average of the differences between consecutive timestamps of each key, plus the standard deviation of the same, I might be able to do this in one pass and report only the keys with an acceptable standard deviation (ideally, 0). However, that requires one bucket per unique key, whereas in practice I might have very few really periodic patterns. Any better ways?

    Read the article
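
    A minimal sketch of the single-pass idea described in the question above: keep, per key, only the last timestamp plus running gap statistics (Welford's update), then report keys whose gap variance is near zero. It does not remove the "one bucket per unique key" memory concern - that would need something like sketching or partitioning the key space - but it shows the bookkeeping. Written in Python; the tuples are a toy stand-in for the on-disk sequence:

        from collections import defaultdict

        def periodic_keys(sequence, tolerance=1e-9):
            # Per key: [last_timestamp, gap_count, gap_mean, M2] for Welford's algorithm.
            stats = defaultdict(lambda: [None, 0, 0.0, 0.0])
            for t, k in sequence:
                last, n, mean, m2 = stats[k]
                if last is not None:
                    gap = t - last
                    n += 1
                    delta = gap - mean
                    mean += delta / n
                    m2 += delta * (gap - mean)
                stats[k] = [t, n, mean, m2]
            for k, (last, n, mean, m2) in stats.items():
                if n >= 2 and m2 / n <= tolerance:   # variance ~ 0 => strictly periodic
                    yield k, mean

        seq = [(1,'a'),(2,'b'),(3,'c'),(4,'b'),(5,'a'),(6,'b'),(7,'d'),(8,'b'),(9,'a'),(10,'b')]
        print(sorted(periodic_keys(seq)))   # [('a', 4.0), ('b', 2.0)]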

  • About Data Objects and DAO Design when using Hibernate

    - by X. Ma
    I'm hesitating between two designs for a database project using Hibernate.

    Design #1. (1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation: a data provider implementation could access data in a database, an XML file, a service, or something else, and the user of a data provider does not need to know about it. (2) Create a database library with Hibernate that implements the data provider interface in (1). The bad thing about Design #1 is that, in order to hide the implementation details, I need to create two sets of data container classes: one in the general data provider interface (let's call them DPI-Objects), the other used in the database library exclusively for entity/attribute mapping in Hibernate (let's call them H-Objects). In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert the H-Objects into DPI-Objects.

    Design #2. Do not create a general data provider interface; expose H-Objects directly to the components that use the database library. So the user of the database library needs to be aware of Hibernate.

    I like Design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user of the database-based data provider? Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2? What do you think? Thanks for your time!

    Read the article
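
    For what it's worth, a minimal sketch of Design #1's shape in Python (the Hibernate/Java specifics of the question don't carry over, and the class names here are invented): callers only ever see plain DTOs, and the mapping from persistence objects to DTOs lives inside the provider implementation.

        from abc import ABC, abstractmethod
        from dataclasses import dataclass

        @dataclass
        class CustomerDTO:            # the "DPI-Object": what callers see
            id: int
            name: str

        class CustomerProvider(ABC):  # the general data provider interface
            @abstractmethod
            def find_customer(self, customer_id: int) -> CustomerDTO: ...

        @dataclass
        class CustomerEntity:         # the "H-Object": persistence-layer shape
            pk: int
            display_name: str

        class DbCustomerProvider(CustomerProvider):
            def find_customer(self, customer_id: int) -> CustomerDTO:
                entity = self._load_entity(customer_id)          # ORM call in real code
                return CustomerDTO(id=entity.pk, name=entity.display_name)

            def _load_entity(self, customer_id: int) -> CustomerEntity:
                return CustomerEntity(pk=customer_id, display_name="stub")  # stand-in

        print(DbCustomerProvider().find_customer(7))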

  • Can a large transaction log cause cpu hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15Gb, with roughly 5Gb to the db and 10Gb to the transaction log. Just recently, a web application that connects to that db has started timing out. I have traced the actions on the web page and examined the queries that execute while these web operations are performed. There is nothing untoward in the execution plan; the query itself uses multiple joins but completes very quickly. However, the db server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say several, read about 5); under this load, timeouts start to occur. I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12Gb of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether it may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks,

    Read the article

  • Delete Duplicate records from large csv file C# .Net

    - by Sandhurst
    I have created a solution which reads a large csv file, currently 20-30 MB in size, and tries to delete the duplicate rows based on certain column values that the user chooses at run time, using the usual technique of finding duplicate rows. But it is so slow that it seems the program is not working at all. What other technique can be applied to remove duplicate records from a csv file? Here's the code; I am definitely doing something wrong:

        DataTable dtCSV = ReadCsv(file, columns); // columns is a List<string>
        DataTable dt = RemoveDuplicateRecords(dtCSV, columns);

        private DataTable RemoveDuplicateRecords(DataTable dtCSV, List<string> columns)
        {
            DataView dv = dtCSV.DefaultView;
            string RowFilter = string.Empty;

            if (dt == null)
                dt = dv.ToTable().Clone();

            DataRow row = dtCSV.Rows[0];
            foreach (DataRow row in dtCSV.Rows)
            {
                try
                {
                    RowFilter = string.Empty;
                    foreach (string column in columns)
                    {
                        string col = column;
                        RowFilter += "[" + col + "]" + "='" + row[col].ToString().Replace("'", "''") + "' and ";
                    }
                    RowFilter = RowFilter.Substring(0, RowFilter.Length - 4);
                    dv.RowFilter = RowFilter;
                    DataRow dr = dt.NewRow();
                    bool result = RowExists(dt, RowFilter);
                    if (!result)
                    {
                        dr.ItemArray = dv.ToTable().Rows[0].ItemArray;
                        dt.Rows.Add(dr);
                    }
                }
                catch (Exception ex) { }
            }
            return dt;
        }

    Read the article
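
    The slowness in the code above comes from re-filtering a DataView for every row, which is quadratic in the number of rows. The usual linear-time approach is a single pass with a set of already-seen key tuples; a HashSet of joined key values plays the same role in C#. A minimal Python sketch (the column names in the commented call are placeholders):

        import csv

        def dedupe_csv(src_path, dst_path, key_columns):
            seen = set()
            with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
                reader = csv.DictReader(src)
                writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
                writer.writeheader()
                for row in reader:
                    key = tuple(row[col] for col in key_columns)   # user-chosen columns
                    if key in seen:
                        continue                                   # duplicate: skip it
                    seen.add(key)
                    writer.writerow(row)

        # dedupe_csv("input.csv", "deduped.csv", ["Name", "Email"])   # hypothetical columns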

  • Importing a large txt file in MySQL?

    - by Taz
    Hi, I am loading text data into MySQL using the following command:

        mysql> Load Data local Infile 'C:\\Documents and Settings\\Scan\\My Documents\\Downloads\\instance_types_en.nt\\Copy of instance_types_en.txt'
               into table dbpediaentities.resources
               fields terminated by ' '
               lines terminated by 'rn';

    The data looks like this (there is actually a newline after each '.'):

        <a> <b> <c> .
        <a> <b> <c> .
        <a> <b> <c> .

    The table has an auto-increment ID field and then text fields for all three values. The file size is about 750MB. The problems are: 1. the data appears to all end up in the first text field, and 2. only 2MB of data is imported.

    Read the article

  • Qt/C++, Problems with large QImage

    - by David Günzel
    I'm pretty new to C++/Qt and I'm trying to create an application with Visual Studio C++ and Qt (4.8.3). The application displays images using a QGraphicsView, and I need to change the images at pixel level. The basic code is (simplified):

        QImage* img = new QImage(img_width, img_height, QImage::Format_RGB32);
        while (do_some_stuff) {
            img->setPixel(x, y, color);
        }
        QGraphicsPixmapItem* pm = new QGraphicsPixmapItem(QPixmap::fromImage(*img));
        QGraphicsScene* sc = new QGraphicsScene;
        sc->setSceneRect(0, 0, img->width(), img->height());
        sc->addItem(pm);
        ui.graphicsView->setScene(sc);

    This works well for images up to around 12000x6000 pixels. The weird thing happens beyond this size. When I set img_width=16000 and img_height=8000, for example, the line img = new QImage(...) returns a null image. The image data should be around 512,000,000 bytes, so it shouldn't be too large, even on a 32-bit system. Also, my machine (Win 7 64-bit, 8 GB RAM) should be capable of holding the data. I've also tried this version:

        uchar* imgbuf = (uchar*) malloc(img_width * img_height * 4);
        QImage* img = new QImage(imgbuf, img_width, img_height, QImage::Format_RGB32);

    At first, this works. The img pointer is valid, and calling img->width(), for example, returns the correct image width (instead of 0, as it does when the image pointer is null). But as soon as I call img->setPixel(), the pointer becomes null and img->width() returns 0. So what am I doing wrong? Or is there a better way of modifying large images at pixel level? Regards, David

    Read the article

  • Handling missing/incomplete data in R--is there function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs:

        mean(c(5,6,12,87,9,NA,43,67), na.rm=T)

    But if you want to deal with NAs before the function call, you need to do something like this:

        vx = vx[!is.na(vx)]              # remove each 'NA' from a vector
        ifelse(is.na(vx), 0, vx)         # remove each 'NA' from a vector and replace it with '0'
        dfx = dfx[complete.cases(dfx),]  # remove each entire row that contains an 'NA' from a data frame

    All of these permanently remove the 'NA's, or the rows with an 'NA' in them. Sometimes this isn't quite what you want, though: making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete.cases', even though that column has no 'NA' values in it). To be as clear as possible about what I'm looking for: Python/NumPy has a class, 'masked array', with a 'mask' method, which lets you conceal - but not remove - NAs during a function call. Is there an analogous function in R?

    Read the article
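
    For readers unfamiliar with the NumPy feature the question above refers to, here is a minimal sketch of masked arrays: the NA-like entries are hidden from reductions but never removed, so the original positions stay intact. (This is the Python behaviour the question wants an R analogue for; in R itself, na.rm= and is.na() indexing remain the usual tools.)

        import numpy as np
        import numpy.ma as ma

        x = np.array([5.0, 6.0, 12.0, 87.0, 9.0, np.nan, 43.0, 67.0])

        masked = ma.masked_invalid(x)      # NaNs are masked, not removed
        print(masked.mean())               # mean over the 7 unmasked values
        print(masked.count())              # 7 valid entries, 8 positions overall
        print(x.shape == masked.shape)     # True: nothing was dropped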

  • Windows.Forms RichTextBox Control - Avoid inserting large data.

    - by SchlaWiener
    I have a Windows Form with a RichTextBox on it. The content of the RichTextBox is written to a database field that is limited to 64k of data. For my purpose that is way more than enough text to store. I have set the MaxLength property to avoid inserting more data than allowed:

        rtcControl.MaxLength = 65536

    However, that only restricts the number of characters the user is allowed to put in the text. With the formatting overhead from the Rtf, I can type more text than I should be allowed to. It gets even worse if I insert a large image, which doesn't increase the TextLength at all, but the Rtf length grows quite a lot. At the moment I check the length of the RichTextBox's Rtf property in the FormClosing event and display a message to the user if it's too large. However, that is just a workaround, because I want to disallow putting more data than allowed into the control (like in a TextBox: if you exceed the MaxLength property, nothing is inserted into the control and you hear the default beep). Any ideas how to achieve this? I already tried using a custom control which extends the RichTextBox and shadows the Rtf property to intercept the insertion, but it seems it isn't executed when I add text. Even the TextChanged event does not fire when I type something in the control.

    Read the article

  • High Runtime for Dictionary.Add for a large amount of items

    - by aaginor
    Hi folks, I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time to insert the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed. Here is my function:

        private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections)
        {
            Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count);

            foreach (NodeConnection con in connections)
            {
                ...
                resultSet.Add(nodeIdPair, newEdge);
            }

            return resultSet;
        }

    In my tests, I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size; it is the same as when I initialize the Dictionary with new Dictionary() (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (ALTHOUGH I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms per item. Any idea how to make the Add operation faster? Thanks in advance, Frank

    Read the article
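
    One thing worth ruling out (an assumption on my part, since the IdPair type is not shown in the question above): if the key type's hashing or equality is expensive, every Add pays that cost regardless of how the dictionary was pre-sized. A small Python sketch of the effect; the principle - per-insert time dominated by hashing the key, not by container growth - carries over to .NET's Dictionary:

        import time

        class SlowKey:
            def __init__(self, a, b):
                self.a, self.b = a, b
            def __hash__(self):
                # Deliberately wasteful hash to mimic an expensive GetHashCode.
                return hash(tuple(str(self.a + self.b) for _ in range(50)))
            def __eq__(self, other):
                return (self.a, self.b) == (other.a, other.b)

        def fill(make_key, n=300000):
            d = {}
            start = time.time()
            for i in range(n):
                d[make_key(i, i + 1)] = i
            return time.time() - start

        print("tuple keys:   %.2fs" % fill(lambda a, b: (a, b)))
        print("SlowKey keys: %.2fs" % fill(SlowKey))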

  • Comparing dicts and updating a list of results

    - by lmnt
    Hello, I have a list of dicts and I want to compare each dict in that list with a dict in a resulting list, add it to the result list if it's not there, and if it is there, update a counter associated with that dict. At first I wanted to use the solution described at http://stackoverflow.com/questions/1692388/python-list-of-dict-if-exists-increment-a-dict-value-if-not-append-a-new-dict but I got an error because one dict cannot be used as a key in another dict. So the data structure I opted for is a list where each entry is a dict and an int:

        r = [[{'src': '', 'dst': '', 'cmd': ''}, 0]]

    The original dataset (that should be compared to the resulting dataset) is a list of dicts:

        d1 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}
        d2 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'}
        d3 = {'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'}
        d4 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}
        o = [d1, d2, d3, d4]

    The result should be:

        r = [[{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}, 2],
             [{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'}, 1],
             [{'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'}, 1]]

    What is the best way to accomplish this? I have a few code examples, but none is really good and most are not working correctly. Thanks for any input on this!

    UPDATE: The final code, after Tamás's comments, is:

        from collections import namedtuple, defaultdict

        DataClass = namedtuple("DataClass", "src dst cmd")

        d1 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1')
        d2 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd2')
        d3 = DataClass(src='192.168.0.2', dst='192.168.0.1', cmd='cmd1')
        d4 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1')
        ds = d1, d2, d3, d4

        r = defaultdict(int)
        for d in ds:
            r[d] += 1

        print "list to compare"
        for d in ds:
            print d

        print "result after merge"
        for k, v in r.iteritems():
            print("%s: %s" % (k, v))

    Read the article
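
    Since the final code above already uses hashable namedtuples, collections.Counter (available since Python 2.7) can replace the defaultdict bookkeeping entirely. A minimal equivalent sketch:

        from collections import Counter, namedtuple

        DataClass = namedtuple("DataClass", "src dst cmd")

        ds = [
            DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1'),
            DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd2'),
            DataClass(src='192.168.0.2', dst='192.168.0.1', cmd='cmd1'),
            DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1'),
        ]

        counts = Counter(ds)                     # each distinct tuple -> occurrence count
        for record, n in counts.most_common():
            print("%s -> %d" % (record, n))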

  • How do you refactor a large messy codebase?

    - by Ricket
    I have a big mess of code. Admittedly, I wrote it myself - a year ago. It's not well commented but it's not very complicated either, so I can understand it -- just not well enough to know where to start as far as refactoring it. I violated every rule that I have read about over the past year. There are classes with multiple responsibilities, there are indirect accesses (I forget the technical term - something like foo.bar.doSomething()), and like I said it is not well commented. On top of that, it's the beginnings of a game, so the graphics is coupled with the data, or the places where I tried to decouple graphics and data, I made the data public in order for the graphics to be able to access the data it needs... It's a huge mess! Where do I start? How would you start on something like this? My current approach is to take variables and switch them to private and then refactor the pieces that break, but that doesn't seem to be enough. Please suggest other strategies for wading through this mess and turning it into something clean so that I can continue where I left off!

    Read the article

  • Git repository gets corrupted when I do a large commit: "Possible repository corruption on the remote side"

    - by mindthief
    Hi All, A friend of mine and I have been trying to use git for a project. It is hosted on his server, and I git clone it as:

        git clone [email protected]:/path/to/git/repos.git

    Pretty standard stuff, and it works great for a while. But every time one of us has added a large commit (which git supposedly handles very well), of the order of 100MB or so, the git repository gets kind of broken. Basically, at this point I will be able to push new changes and pull other changes (I think), but when I try to clone the repository in a fresh location using that command above, I get an error message that says:

        $ git clone [email protected]:/path/to/git/repos.git
        Initialized empty Git repository in /local/path/to/repos/.git/
        remote: Counting objects: 1455, done.
        remote: Compressing objects: 100% (1235/1235), done.
        error: git upload-pack: git-pack-objects died with error.s
        fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    This has happened 3 or 4 times now, and it's always when I add a large commit. Any idea why this is happening? How can we fix it? We're both using Mac OS X Snow Leopard. Thanks! -M

    Read the article

  • Memorystream and Large Object Heap

    - by Flo
    I have to transfer large files between computers over unreliable connections using WCF. Because I want to be able to resume the transfer, and I don't want to be limited in my file size by WCF, I am chunking the files into 1MB pieces. These "chunks" are transported as streams, which works quite nicely so far. My steps are:

    1. open the filestream
    2. read a chunk from the file into a byte[] and create a memorystream from it
    3. transfer the chunk
    4. back to 2. until the whole file is sent

    My problem is in step 2. I assume that when I create a memory stream from a byte array, it will end up on the LOH and ultimately cause an OutOfMemoryException. I have not actually been able to produce this error, so maybe I am wrong in my assumption. Now, I don't want to send the byte[] in the message, as WCF will tell me the array size is too big. I can change the maximum allowed array size and/or the size of my chunks, but I hope there is another solution. My actual questions: Will my current solution create objects on the LOH, and will that cause me problems? Is there a better way to solve this? Btw.: On the receiving side I simply read smaller chunks from the arriving stream and write them directly into the file, so no large byte arrays are involved.

    Read the article

  • WCF - Contract Name could not be found in the list of contracts

    - by user208662
    Hello, I am relatively new to WCF. However, I need to create a service that exposes data to both Silverlight and AJAX client applications. In an attempt to accomplish this, I have created the following service to serve as a proof of concept:

        [ServiceContract(Namespace="urn:MyCompany.MyProject.Services")]
        public interface IJsonService
        {
            [OperationContract]
            [WebInvoke(Method = "GET", RequestFormat=WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
            List<String> JsonFindNames();
        }

        [ServiceContract(Namespace="urn:MyCompany.MyProject.Services")]
        public interface IWsService
        {
            [OperationContract(Name="FindNames")]
            List<String> WsFindNames();
        }

        [ServiceBehavior(Name="myService", Namespace="urn:MyCompany.MyProject.Services")]
        public class myService : IJsonService, IWsService
        {
            public List<String> JsonFindNames()
            {
                return FindNames();
            }

            public List<String> WsFindNames()
            {
                return FindNames();
            }

            public List<string> FindNames()
            {
                List<string> names = new List<string>();
                names.Add("Alan");
                names.Add("Bill");
                return names;
            }
        }

    When I try to access this service, I receive the following error:

        The contract name 'myService' could not be found in the list of contracts implemented by the service 'myService'.

    What is the cause of this? How do I fix it? Thank you

    Read the article

  • Including/Organizing HTML in large JavaScript project

    - by Bill Zimmerman
    Hi, I've got a fairly large web app, with several mini applets on each page. These applets are almost always identical jQuery apps. I am looking for advice on how I should organize/include smaller parts of these jQuery apps within my larger project. For example, each app has several independent tabs. If possible, I would like to store each of the tabs as a separate .html file because this makes development easier. My requirements are: 1) All of the html 'tabs' are loaded on the client's end when the page loads; I would like to avoid any delays from dynamically requesting the tab html. 2) If possible, I would like to minimize the raw data sent; for example, it would be preferable to send each tab once, instead of sending each tab 10 times if there are ten applets on that page. Questions: 1) What are my options for 'including' the HTML files / JavaScript code? 2) Any tips for keeping my development simple in this situation? Surely there has to be a better way than just editing one massive html file when working with large pages.

    Read the article

  • Loading/Displaying large amounts of data on a webpage.

    - by jb
    I have a webpage which contains a table for displaying a large amount of data (on average from 2,000 to 10,000 rows). This page takes a long time to load/render, which is understandable. The problem is, while the page is loading, the PC's memory usage skyrockets (500MB on my test system is in use by iexplorer) and the whole PC grinds to a halt until it has finished, which can take a minute or two. IE hangs until it is complete, and switching to another running program is the same. I need to fix this, and ideally I want to accomplish two things: 1) Load individual parts of the page separately, so the page can render initially without the large data table; a loading div will be placed there until it is ready. 2) Don't use up so much memory or local resources while rendering, so that the user can at least use a different tab/application at the same time. How would I go about doing both or either of these? I'm an applications programmer by trade, so I am still a little fuzzy on the things I can do in a web environment. Cheers all.

    Read the article
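
    A common way to get both goals in the question above is to render the page shell immediately and have the table pull one page of rows at a time from a small JSON endpoint. A minimal server-side sketch in Python, using Flask purely as an assumed stand-in for whatever backend the site actually runs; the endpoint, data, and parameter names are invented for illustration:

        from flask import Flask, jsonify, request

        app = Flask(__name__)
        ROWS = [{"id": i, "value": "row %d" % i} for i in range(10000)]   # stand-in data

        @app.route("/rows")
        def rows():
            page = request.args.get("page", 1, type=int)
            size = request.args.get("size", 200, type=int)
            start = (page - 1) * size
            return jsonify(total=len(ROWS), rows=ROWS[start:start + size])

        # The page renders a placeholder div, then fetches /rows?page=1,
        # /rows?page=2, ... as the user pages or scrolls through the table.

        if __name__ == "__main__":
            app.run()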

  • MVC2 Modelbinder for List of derived objects

    - by user250773
    I want a list of different (derived) object types working with the default model binder in ASP.NET MVC 2. I have the following view model:

        public class ItemFormModel
        {
            [Required(ErrorMessage = "Required Field")]
            public string Name { get; set; }

            public string Description { get; set; }

            [ScaffoldColumn(true)]
            //public List<Core.Object> Objects { get; set; }
            public ArrayList Objects { get; set; }
        }

    And the list contains objects of different derived types, e.g.

        public class TextObject : Core.Object
        {
            public string Text { get; set; }
        }

        public class BoolObject : Core.Object
        {
            public bool Value { get; set; }
        }

    It doesn't matter whether I use the List or the ArrayList implementation: everything gets nicely scaffolded in the form, but the model binder doesn't resolve the derived object type properties for me when posting back to the ActionResult. What could be a good solution for the view model structure to get a list of different object types handled? Having an extra list for every object type (e.g. a list of TextObject, a list of BoolObject, etc.) does not seem like a good solution to me, since it adds a lot of overhead both in building the view model and in mapping it back to the domain model. Thinking about the other approach of binding all properties in a custom model binder, how can I make use of the data annotations approach here (validating required attributes etc.) without a lot of overhead?

    Read the article

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e. can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. These are ~100 x ~100 matrices, on average. Using single precision floats, it works out to 400 GB overall. I need to 1) generate the cell array or pieces of it without trying to place the whole thing in memory and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval() as well as save and clear occurring in loops.

        for i=1:k
            [~,m] = size(data{i});
            cur_var = ['H' int2str(i)];
            %# if i == 1; save('FileName'); end; %# If using a single MAT file and need to create it.
            eval([cur_var ' = cell(1,k-i);']);
            for j=i+1:k
                [~,n] = size(data{j});
                eval([cur_var '{i,j} = zeros(m,n,''single'');']);
                eval([cur_var '{i,j} = compare(data{i},data{j});']);
            end
            save(cur_var,cur_var); %# Add '-append' when using a single MAT file.
            clear(cur_var);
        end

    The other thing I have done is to perform the split when mod((i+j-1)/2,max(factor(k(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad because a linear index could be used. Does anyone know/see a better way?

    Read the article
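
    One language-agnostic way to avoid both eval() and holding the triangle in memory is to give every (i, j) comparison its own file on disk, named by its parent indices, and load blocks lazily. A minimal sketch of that layout in Python/NumPy (not MATLAB); compare() here is a dummy stand-in for the real pairwise operation, and the file-naming scheme is invented for illustration:

        import os
        import numpy as np

        def compare(a, b):
            # Dummy stand-in for the real pairwise comparison.
            return a @ b.T

        def block_path(out_dir, i, j):
            return os.path.join(out_dir, "block_%04d_%04d.npy" % (i, j))

        def build_blocks(data, out_dir):
            os.makedirs(out_dir, exist_ok=True)
            k = len(data)
            for i in range(k):
                for j in range(i + 1, k):            # triangular: skip self/duplicate pairs
                    result = compare(data[i], data[j]).astype(np.float32)
                    np.save(block_path(out_dir, i, j), result)   # only one block in memory

        def load_block(out_dir, i, j):
            return np.load(block_path(out_dir, i, j))            # indexed by its parents

        data = [np.random.rand(np.random.randint(50, 150), 20) for _ in range(5)]
        build_blocks(data, "blocks")
        print(load_block("blocks", 0, 3).shape)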

  • PHP: problem rendering large images (error 321)

    - by JP19
    Hi ... it's me again with a PHP problem :) The following is part of my PHP script which renders JPEG images:

        ...
        $tf = $requested_file;
        $image_type = "jpeg";
        header("Content-type: image/${image_type}");
        $CMD = "\$image=imagecreatefrom${image_type}('$tf'); image${image_type}(\$image);";
        eval($CMD);
        exit;
        ...

    There is no syntactical error, because the above code works fine for small images, but for large images it gives "Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error." in the browser. To be sure, I created two images with ImageMagick from the same source image - one resized to 10% of the original and the other to 90%. http://mostpopularsports.net/images/misc/ttt10.jpg works; http://mostpopularsports.net/images/misc/ttt90.jpg gives Error 321 in the browser. There is a related question with a solution posted by the OP here: Error writing content through Apache. But I cannot understand how to make the fix. Can someone help me with it? I have looked at the headers in Chrome. For the first request, everything is fine; for the second request, the request headers are all garbled. Both images are JPEG (as they are created by ImageMagick, but to be sure I checked):

        misc/ttt10.jpg: JPEG image data, JFIF standard 1.01
        misc/ttt90.jpg: JPEG image data, JFIF standard 1.01

    Finally, the way I fixed it was to remove the Transfer-Encoding: chunked header from the response. [This header was sent by Apache only when the data was large enough.] (I had an internal proxy, so I did it in the proxy script - otherwise one may need to do it in the Apache settings.) There were some good answers and I have selected the one that helped me solve the problem best. Thanks, JP

    Read the article

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in the data, as well as changes in the change of the data (e.g. the week-to-week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure:

        CREATE TABLE sales_data (
            sales_time date NOT NULL,
            sales_amt double NOT NULL
        )

    For the purpose of this question I have left out other fields you would expect to see - like product_id, sales_person_id, etc. - as they have no direct relevance here. AFAICT, the only fields that will be used in the query are the sales_time and the sales_amt fields (unless I am mistaken). I also have a date dimension table with the following structure:

        CREATE TABLE date_dimension (
            id integer NOT NULL,
            datestamp date NOT NULL,
            day_part integer NOT NULL,
            week_part integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part integer NOT NULL,
            year_part integer NOT NULL
        );

    which partitions dates into reporting ranges. I need to write queries that will allow me to do the following:

    1. Return the change in week-on-week sales_amt for a specified period. For example, the change between sales today and sales N days ago, where N is a positive integer (N == 7 in this case).
    2. Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change; now we want to know how that change differs from the (week-on-week) change calculated last week.

    I am stuck at this point, as SQL is my weakest skill. I would be grateful if an SQL master could explain how I can write these queries in a DB-agnostic way (i.e. using ANSI SQL).

    Read the article
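
    A minimal sketch of query (1) from the question above, using the common lag-by-self-join pattern. It is run here against SQLite purely so the snippet is self-contained; the join itself is portable, though the date arithmetic shown is SQLite's, so substitute your engine's interval syntax (or a join through the date_dimension table) for a strictly ANSI version. Query (2) is the same join applied once more to the output of (1), e.g. via a subquery.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE sales_data (sales_time date NOT NULL, sales_amt double NOT NULL);
            INSERT INTO sales_data VALUES
                ('2010-01-01', 100), ('2010-01-08', 130),
                ('2010-01-15', 125), ('2010-01-22', 160);
        """)

        week_on_week = """
            SELECT cur.sales_time,
                   cur.sales_amt - prev.sales_amt AS change_vs_prev_week
            FROM sales_data cur
            JOIN sales_data prev
              ON prev.sales_time = date(cur.sales_time, '-7 days')
            ORDER BY cur.sales_time
        """
        for row in conn.execute(week_on_week):
            print(row)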
