Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • Is there a faster way to draw text?

    - by mystify
    Shark complains about a big performance hit on this line, which takes something like 80% of CPU time. I have a counter that is updated very frequently, and performance seriously sucks. It's a custom UILabel subclass with -drawRect: implemented. Every time the counter value changes, this is used to draw the new text:

        [self.text drawInRect:textRect withFont:correctedFont lineBreakMode:self.lineBreakMode alignment:self.textAlignment];

    When I comment this line out, performance rocks. It's smooth and fast. So Shark isn't wrong about this. But what could I do to improve it? Maybe go a level deeper? Does that make any sense? Or is drawing text really so incredibly heavy?


  • Quick-to-develop web app in Java

    - by Mike Q
    Hi all, I need to develop a basic web app very quickly (1 week) for a demo. My requirements:

    - Java (I need to make use of existing Java libraries to access the relevant data)
    - 2 screens: one for a static data view, maybe with some search parameters; the other for basic form entry
    - No fancy AJAX required
    - Ideally easy for a web designer to come in and tart it up as necessary, without having to rewrite everything

    My first stop was going to be to check out Wicket, as I've heard good things about it. I don't have the time right now to dive into anything heavy, which probably writes off JSF in my mind (I played with JSF 1 -- a steep learning curve, which I've now slid back down). I'm happy to treat the result as throwaway, so if there's a framework which starts off well but then doesn't scale up to bigger projects, that would be OK. Any suggestions appreciated on frameworks/approach.


  • Windows/Samba connection error

    - by Gomibushi
    I have a Linux file server serving up /home for Linux and Windows users. I was able to connect from my Windows client, but not from a DC. Then suddenly I could connect from the DC too. The Linux servers run Centrify clients and as such are part of the domain. All are on the same subnet. This is what log.smbd says, repeatedly:

        [2010/02/11 11:25:57, 0] lib/util_sock.c:read_data(534)
          read_data: read failure for 4 bytes to client 192.168.200.3. Error = Connection reset by peer

    On Windows it appeared as an "unknown error". EDIT: the error code is "0x80004005". We are developing a system that depends on the Samba share, and we are worried this will appear again. It would be nice to pinpoint the root cause of this. Any ideas what this might be? Places to look?


  • How to create a formatted localized string?

    - by mystify
    I have a localized string which needs to take a few variables. However, in localization it is important that the order of the variables can change from language to language. So this is not a good idea:

        NSString *text = NSLocalizedString(@"My birthday is at %@ %@ in %@", nil);

    In some languages some words come before others, while in other languages it's the reverse (I can't think of a good example at the moment). How would I provide NAMED variables in a formatted string? Is there any way to do it without some heavy self-made string replacement? Even numbered variables like {%@1}, {%@2}, and so on would be sufficient... is there a solution?
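    What the question calls numbered variables exist in most printf-style format systems as positional specifiers (in Cocoa format strings the syntax is %1$@, %2$@, and so on). A minimal sketch of the same idea in Java syntax, since the concept is language-independent; the sentence and arguments here are invented for illustration:

        public class PositionalFormat {
            public static void main(String[] args) {
                // The same three arguments consumed in two different orders,
                // as two localizations of one sentence might require.
                String en = String.format("My birthday is at %1$s %2$s in %3$s", "3", "pm", "June");
                String other = String.format("In %3$s, at %1$s %2$s, is my birthday", "3", "pm", "June");
                System.out.println(en);     // My birthday is at 3 pm in June
                System.out.println(other);  // In June, at 3 pm, is my birthday
            }
        }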


  • How to write a program that mimics Fiddler by using tcpdump or from scratch?

    - by ????
    Given that Fiddler is not available on Mac OS X or Ubuntu, and assuming we don't install/use Wireshark or any other heavier-duty tool, what is a way to use tcpdump so that:

    1) It can print out

        GET /foo/bar HTTP/1.1
        [request content in RAW text]
        [response content in RAW text]
        POST /foo/... HTTP/1.1
        ...

    This should be doable with tcpdump alone, or with tcpdump in a short shell script or Ruby/Python/Perl script.

    2) Even neater would be a script that outputs HTML, with

        GET /foo/bar HTTP/1.1
        POST /foo/... HTTP/1.1

    on the page for any browser to display, and then when any of those lines is clicked, it expands to show the RAW content like (1) does. Click again and it hides the details. The expansion UI can be done using jQuery or any JS library. The script may be short... possibly less than 20 lines? Does anybody know how to do it, either for (1) or (2)?


  • C++0x optimizing compiler quality

    - by aaa
    Hello. I do some heavy number crunching, and floating-point performance is very important to me. I like the Intel compiler's performance very much and am quite content with the quality of the assembly it produces. At some point I am thinking of trying C++0x, mainly for the sugar (auto, initializer lists, etc.) but also lambdas; at this point I get those features in regular C++ by means of Boost. How good is the assembly code that compilers generate for C++0x, specifically the Intel and GCC compilers? Do they produce SSE code? Is performance comparable to regular C++? Are there any benchmarks? My Google search did not reveal much. Thank you.


  • What are the typical reasons JavaScript developed on IE fails on Firefox?

    - by lwburk
    Inspired by this post, it occurs to me that I often suffer from the opposite problem. That is, I've got code in a legacy application designed only for Internet Explorer, and I need to get it to work in Firefox. For example, I recently worked on an app that made heavy use of manually simulated click events, like this:

        select.options[0].click();

    ...which completely broke the application in Firefox. But you wouldn't find that information in the answers to the other question, because that's not something you'd ever even attempt if your app first targeted Firefox. What other things should a developer updating a legacy IE-only application look for when migrating to modern browsers?


  • preg_replace or regex string translation

    - by ccolon
    I found some partial help but cannot seem to fully accomplish what I need. I need to be able to do the following: a regular expression that replaces any 1-to-3-character word between two words that are longer than 3 characters with a match-anything expression. For example:

        walk to the beach == walk(.*)beach

    If the 1-to-3-character word is not preceded by a word that's longer than 3 characters, then I want to translate that 1-to-3-letter word to ' ?'. For example:

        on the beach == on ?the ?beach

    The simpler the rule the better (of course, if there's an alternative, more complicated version that's more performant, I'll take that as well, as I eventually anticipate heavy usage). This will be used in a PHP context, most likely with preg_replace, so if you can put it in that context then even better!


  • Java bytecode optimization

    - by Idob
    This is a basic question. I have code which shouldn't run on metadata beans. All metadata beans are located under the metadata package. Now, I use the reflection API to find out whether a class is located in the metadata package:

        if (newEntity.getClass().getPackage().getName().contains("metadata"))

    I use this if in several places within the code. The question is: should I instead do this once, with:

        boolean isMetadata = false;
        if (newEntity.getClass().getPackage().getName().contains("metadata")) {
            isMetadata = true;
        }

    C++ makes optimizations and knows that this code was already called, so it won't call it again. Does Java make this optimization? I know the reflection API is a bit heavy, and I'd prefer not to waste expensive runtime.
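    On what the question is weighing: the JIT is not guaranteed to treat repeated reflective call chains as redundant, so hoisting the check into a local is the safe bet. A minimal sketch, assuming entities arrive one at a time (the helper name and the stand-in entity are invented):

        public final class MetadataCheck {
            // Evaluate the reflective lookup once per entity and reuse the result;
            // getPackage() can return null (default package), so guard for it.
            static boolean isMetadataBean(Object entity) {
                Package p = entity.getClass().getPackage();
                return p != null && p.getName().contains("metadata");
            }

            public static void main(String[] args) {
                Object newEntity = "some entity";            // stand-in for a real bean
                boolean isMetadata = isMetadataBean(newEntity);
                if (!isMetadata) {
                    // ... run the code that must not touch metadata beans ...
                }
            }
        }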


  • Is it possible to "group by" without losing the original rows?

    - by toPeerOrNotToPeer
    I have a table like this:

        ID | name                  | commentsCount
        1  | mysql for dummies     | 33
        2  | mysql beginners guide | 22

    and a query like this:

        SELECT ..., commentsCount  -- will return 33 for the first row, 22 for the second one
        FROM mycontents
        WHERE name LIKE "%mysql%"

    I also want to know the total of comments across all rows:

        SELECT ..., SUM(commentsCount) AS commentsCountAggregate  -- should return 55
        FROM mycontents
        WHERE name LIKE "%mysql%"

    but this one obviously returns a single row with the total. Now I want to merge these two queries into one single query, because my actual query is very heavy to execute (it uses boolean full-text search, substring offset search, and sadly a lot more), and I don't want to execute it twice. Is there a way to get the total of comments without making the SELECT twice? Custom functions are welcome! Variable usage is also welcome; I've never used them...


  • PropertyUtils performance

    - by mR_fr0g
    I have a problem where I need to walk through an object graph and pick out a particular property value. My original solution caches a linked list of property names that need to be applied in order to get from point A to point B in the object graph. I then use Apache Commons PropertyUtils to iterate through the linked list, calling getProperty(Object bean, String name) until I have reached point B. My question is about how this will perform compared to, say, caching the Method objects for each step. What is PropertyUtils doing under the bonnet? Is it doing a lot of reflection / heavy lifting?
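    For comparison, a minimal sketch of the cached-Method alternative the question mentions, assuming plain JavaBean getters along the path (the class name is invented, and this is not a claim about what PropertyUtils itself does internally):

        import java.lang.reflect.Method;

        // Resolve each getter once, then replay the cached Methods on every walk,
        // so per-call name resolution happens only at construction time.
        final class PathWalker {
            private final Method[] steps;

            PathWalker(Class<?> root, String... properties) throws NoSuchMethodException {
                steps = new Method[properties.length];
                Class<?> current = root;
                for (int i = 0; i < properties.length; i++) {
                    String p = properties[i];
                    String getter = "get" + Character.toUpperCase(p.charAt(0)) + p.substring(1);
                    steps[i] = current.getMethod(getter);
                    current = steps[i].getReturnType();
                }
            }

            Object walk(Object bean) throws Exception {
                Object value = bean;
                for (Method m : steps) {
                    value = m.invoke(value);  // one reflective invoke per step, no lookup
                }
                return value;
            }
        }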


  • Spring 3.0 REST implementation or Jersey?

    - by hnilsen
    Hi, SO! I'm currently trying to figure out which implementation of JSR-311 I'm going to recommend further up the food chain. I've pretty much narrowed it down to two options -- Spring 3.0 with its native support for REST, or Sun's own Jersey (Restlet might also be an option). To me there doesn't seem to be much of a difference in the actual syntax, but there might be performance issues that I haven't figured out yet. The service is meant to replace some heavy-duty EJBs and provide a RESTful web service instead. The load is expected to be rather high, up in the 100k-users-per-day (max) range, but it will be seriously load balanced. Thanks for all your insights.
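    For a feel of the syntax being compared, a minimal JSR-311 resource as Jersey would serve it (the path and response body are invented; Spring 3.0 expresses the same mapping with @RequestMapping annotations instead):

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;

        // One GET endpoint: /users/{id} returns a JSON document.
        @Path("/users")
        public class UserResource {

            @GET
            @Path("{id}")
            @Produces("application/json")
            public String getUser(@PathParam("id") long id) {
                // Placeholder body; a real service would look the user up.
                return "{\"id\": " + id + "}";
            }
        }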


  • Response.Redirect exception

    - by Tedd Hansen
    Executing the line:

        Response.Redirect("Whateva.aspx", true);

    results in:

        A first chance exception of type 'System.Threading.ThreadAbortException' occurred in mscorlib.dll
        An exception of type 'System.Threading.ThreadAbortException' occurred in mscorlib.dll but was not handled in user code

    The exception is because of the "true" part, telling it to end the current request immediately. Is this how it should be? Consider: exceptions are generally considered heavy, and many times the reason for ending the request early is to avoid processing the rest of the page; also, exceptions show up in performance monitoring, so monitoring the solution will show a false number of exceptions. Is there an alternative way to achieve the same thing?


  • Override `drop` for a custom sequence

    - by Bruno Reis
    In short: in Clojure, is there a way to redefine a function from the standard sequence API (which is not defined on any interface like ISeq, IndexedSeq, etc.) on a custom sequence type I wrote?

    1. Huge data files

    I have big files in the following format:

    - A long (8 bytes) containing the number n of entries
    - n entries, each one composed of 3 longs (i.e., 24 bytes)

    2. Custom sequence

    I want to have a sequence over these entries. Since I cannot usually hold all the data in memory at once, and I want fast sequential access to it, I wrote a class similar to the following:

        (deftype DataSeq [id ^long cnt ^long i cached-seq]
          clojure.lang.IndexedSeq
          (index [_] i)
          (count [_] (- cnt i))
          (seq [this] this)
          (first [_] (first cached-seq))
          (more [this] (if-let [s (next this)] s '()))
          (next [_] (if (not= (inc i) cnt)
                      (if (next cached-seq)
                        (DataSeq. id cnt (inc i) (next cached-seq))
                        (DataSeq. id cnt (inc i)
                                  (with-open [f (open-data-file id)]
                                    ; open a memory-mapped byte array on the file
                                    ; seek to the exact position to begin reading
                                    ; decide on an optimal amount of data to read
                                    ; eagerly read and return that amount of data
                                    ))))))

    The main idea is to read ahead a bunch of entries into a list and then consume from that list. Whenever the cache is completely consumed, if there are remaining entries, they are read from the file into a new cache list. Simple as that. To create an instance of such a sequence, I use a very simple function like:

        (defn ^DataSeq load-data [id]
          (next (DataSeq. id (count-entries id) -1 [])))
        ; count-entries is a trivial "open file and read a long", memoized

    As you can see, the format of the data allowed me to implement count very simply and efficiently.

    3. drop could be O(1)

    In the same spirit, I'd like to reimplement drop. The format of these data files allows me to reimplement drop in O(1) (instead of the standard O(n)), as follows: if dropping fewer than the remaining cached items, just drop the same amount from the cache and be done; if dropping more than cnt, just return the empty list; otherwise, figure out the position in the data file, jump right to that position, and read data from there.

    My difficulty is that drop is not implemented in the same way as count, first, seq, etc. The latter functions call a similarly named static method in RT which, in turn, calls my implementation above, while the former, drop, does not check whether the instance of the sequence it is called on provides a custom implementation. Obviously, I could provide a function named anything but drop that does exactly what I want, but that would force other people (including my future self) to remember to use it instead of drop every single time, which sucks. So, the question is: is it possible to override the default behaviour of drop?

    4. A workaround (I dislike)

    While writing this question, I've just figured out a possible workaround: make the reading even lazier. The custom sequence would just keep an index and postpone the reading operation, which would happen only when first is called. The problem is that I'd need some mutable state: the first call to first would cause some data to be read into a cache, and all subsequent calls would return data from this cache. There would be similar logic in next: if there's a cache, just next it; otherwise, don't bother populating it -- it will be done when first is called again. This would avoid unnecessary disk reads. However, this is still less than optimal -- it is still O(n), and it could easily be O(1).

    Anyways, I don't like this workaround, and my question is still open. Any thoughts? Thanks.


  • Access static constant variable from multiple threads in C

    - by user325519
    I have some experience with multithreaded programming under Linux (C/C++ and POSIX threads), but even the obvious cases are sometimes very complicated. I have several static constant variables (global and function-local) in my code. Can I access them simultaneously from multiple threads without using mutexes? Because I don't modify them it should be OK, but it's always better to ask. I have to do heavy speed optimization, so even operations as fast as a mutex lock/unlock are quite expensive for me, especially because my application is going to access these variables from long loops.
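    The general rule the question is reaching for: data that is fully initialized before the reader threads start and never written afterwards can be read concurrently without locks. A sketch of the same principle in Java terms, since the rule is not C-specific (the table contents are invented):

        // Read-only shared data: initialized once, before any reader thread
        // exists, and never mutated afterwards -- so no mutex is needed.
        public class ReadOnlyShared {
            static final double[] TABLE = {1.0, 2.0, 4.0, 8.0};

            public static void main(String[] args) throws InterruptedException {
                Runnable reader = () -> {
                    double sum = 0;
                    for (double v : TABLE) sum += v;  // concurrent reads only
                    System.out.println(sum);
                };
                Thread t1 = new Thread(reader);
                Thread t2 = new Thread(reader);
                t1.start(); t2.start();
                t1.join(); t2.join();
            }
        }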


  • Database implementation question? [closed]

    - by gundam
    Consider a disk with a sector size of 512 bytes, 2000 tracks per surface, 50 sectors per track, 5 double-sided platters, and an average seek time of 10 msec. Assume a block size of 1024 bytes is selected, and that a file containing 100,000 records of 100 bytes each is to be stored on the disk, where none of the records can span 2 blocks. How many blocks are needed to store the entire file? If the file is arranged sequentially on disk, how many surfaces are required? Now, I have calculated that 10,000 blocks are needed to store the 100,000 records, but I am not sure how to find the number of surfaces required. I have only calculated that the capacity of a track is 25KB and the capacity of a surface is 50,000KB; I don't know how to calculate the number of surfaces... Could anyone help me get the answer? Thanks a lot!
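    A worked version of the missing step, under the stated geometry (a sketch of the arithmetic, not an official answer key):

        \[
        \begin{aligned}
        \text{records per block} &= \lfloor 1024 / 100 \rfloor = 10
        &\quad\Rightarrow\quad \text{blocks} &= 100{,}000 / 10 = 10{,}000 \\
        \text{blocks per track} &= (50 \times 512) / 1024 = 25
        &\quad\Rightarrow\quad \text{tracks} &= 10{,}000 / 25 = 400
        \end{aligned}
        \]

    Since one surface holds 2000 tracks and the file needs only 400, a single surface suffices.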


  • What command line tools for monitoring host network activity on Linux do you use?

    - by user27388
    What command line tools are good for reliably monitoring network activity? I have used ifconfig, but an office colleague said that its statistics are not always reliable. Is that true? I have recently used ethtool, but is it reliable? What about just looking at the /proc/net 'files'? Is that any better? EDIT: I'm interested in packets Tx/Rx and bytes Tx/Rx, but most importantly in drops or errors and why the drop/error might have occurred.
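    On the /proc/net option: /proc/net/dev exposes exactly the counters listed in the edit (bytes, packets, errs, drop, per direction). A minimal sketch of reading it, in Java for illustration; the column positions are assumed from the file's two header lines:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        // Print rx/tx bytes and drops per interface from /proc/net/dev.
        // After the interface name and colon, the receive half starts with
        // bytes/packets/errs/drop, and the transmit half starts at field 8.
        public class NetDev {
            public static void main(String[] args) throws IOException {
                for (String line : Files.readAllLines(Paths.get("/proc/net/dev"))) {
                    int colon = line.indexOf(':');
                    if (colon < 0) continue;  // skip the two header lines
                    String iface = line.substring(0, colon).trim();
                    String[] f = line.substring(colon + 1).trim().split("\\s+");
                    System.out.printf("%-8s rxBytes=%s rxDrop=%s txBytes=%s txDrop=%s%n",
                            iface, f[0], f[3], f[8], f[11]);
                }
            }
        }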


  • Looking for a specific kind of web framework, no malarkey please

    - by Hello you all men
    We do maintenance on a number of systems. I'm finally in a place where I'm the boss for once, and I have to design a large system that will have a long maintenance contract. There are a couple of tasks I find myself always repeating: 1) similar tasks for users with JS and those without, and 2) similar things for contents and RSS/Atom feeds, etc. To handle these I will need appropriate handling of assets (think JS files, CSS, themes/templates, etc.), an excellent auth/user system, JavaScript/AJAX forethought, and appropriate model setups. CodeIgniter fails on many of these. Basically, with enough time I could build this system with Zend, but I'm curious what else is out there, as Zend is also kind of a heavyweight. We need something that is rapid but maintainable; CodeIgniter is not maintainable. We will have a lot of AJAX APIs in place for the design team to play with. At first I thought jQuery was cool, but now I'm looking at Dojo.


  • Split a Large File In C++

    - by wdow88
    Hey all, I'm trying to write a program that takes a large file (of any type) and splits it into many smaller "chunks". I think I have the basic idea down, but for some reason I cannot create a chunk size over 12,000 bytes. I know there are a few solutions on Google, etc., but I am more interested in learning the origin of this limitation than in actually using the program to split files.

        // This program splits a larger file into smaller files of a user-inputted size.
        #include <iostream>
        #include <fstream>
        #include <string>
        #include <sstream>
        #include <direct.h>
        #include <stdlib.h>
        using namespace std;

        void GetCurrentPath(char* buffer)
        {
            _getcwd(buffer, _MAX_PATH);
        }

        int main()
        {
            // use the function to get the path
            char CurrentPath[_MAX_PATH];
            GetCurrentPath(CurrentPath); // get the current directory (used for displaying output)

            fstream bigFile;
            string filename;
            int partsize;

            cout << "Enter a file name: ";
            cin >> filename;   // receive target file
            cout << "Enter the number of bytes in each smaller file: ";
            cin >> partsize;   // receive volume size

            bigFile.open(filename.c_str(), ios::in | ios::binary);
            bigFile.seekg(0, ios::end);  // position get-ptr 0 bytes from end
            int size = bigFile.tellg();  // get-ptr position is now same as file size
            bigFile.seekg(0, ios::beg);  // position get-ptr 0 bytes from beginning

            for (int i = 0; i <= (size / partsize); i++) {
                // build the file name
                string partname = filename;  // the original filename
                string charnum;              // archive number
                stringstream out;            // stringstream object out, used to build the archive name
                out << "." << i;
                charnum = out.str();
                partname.append(charnum);    // put the part name together

                // write the new file part
                fstream filePart;
                filePart.open(partname.c_str(), ios::out | ios::binary); // open a new file with the name built above

                // check if near the end of file
                if (bigFile.tellg() < (size - (size % partsize))) {
                    filePart.write(reinterpret_cast<char *>(&bigFile), partsize); // write the selected amount to the file
                    filePart.close();                  // close file
                    bigFile.seekg(partsize, ios::cur); // move pointer to next position to be written
                }
                // change the size of the last volume because it is the end of the file
                else {
                    filePart.write(reinterpret_cast<char *>(&bigFile), (size % partsize)); // write the selected amount to the file
                    filePart.close();                  // close file
                }
                cout << "File " << CurrentPath << partname << " produced" << endl; // display the progress of the split
            }
            bigFile.close();
            cout << "Split Complete." << endl;
            return 0;
        }

    Any ideas? Thanks!


  • kernel 2.6.36 not booting

    - by Saumitra
    Hi, I'm a newbie to kernel programming. I am trying to boot kernel 2.6.36 on my ECG machine. It was working perfectly on 2.6.33.2. It is getting stuck at this step:

        ## Booting kernel from Legacy Image at 81000000 ...
           Image Name:
           Created:      2010-12-27   5:55:56 UTC
           Image Type:   MIPS Linux Kernel Image (gzip compressed)
           Data Size:    1974278 Bytes = 1.9 MB
           Load Address: 80100000
           Entry Point:  80104730
           Verifying Checksum ... OK
           Uncompressing Kernel Image ... OK
           Starting kernel ...

    After this the system either resets or hangs. I have also checked the configuration and set it properly. Please let me know what might be wrong.


  • Is the 4GB limit on these embedded/express DBs good enough? What's next if the limit is reached?

    - by edwin.nathaniel
    I'm wondering how long it would take a (theoretical) desktop app to consume the full 4GB limit of these express/embedded database products (SQL Server Express, Oracle Express, SQLite3, etc.), provided that big blobs are stored in the filesystem. Also, what would be your strategy when it hits the 4GB?

    - Archive the old DB
    - Copy 1-3 months of data to the new DB (consider this a cache strategy?)
    - Start using the new DB from this point onward (how do you access the old data?)

    I understand that the answer may vary depending on how much data you store in each table/column, but please describe based on your experience (what kind of desktop app, write- or read-heavy, and how long until it reaches the limit, according to your best guess).


  • C#, Manage concurrency in database access

    - by Goul
    Hi there, a while ago I wrote an application used by multiple users to handle trade creation. I haven't done development for some time now and can't remember how I managed the concurrency between the users, so I would appreciate your advice in terms of design. The application was as follows:

    - One heavy client per user
    - A single database
    - Access to the database for each user to insert/update/delete trades
    - A grid in the application reflecting the trades table, updated each time someone changes a deal

    My questions:

    1. Do you confirm I shouldn't worry about the database connection for each application? Considering that there is a singleton in each, I would expect one connection per client with no issue.
    2. How do I prevent concurrent accesses from conflicting? I guess I should lock when modifying the data, but I don't remember how to.
    3. How can the grid be automatically updated whenever the database changes (by another user, for example)?

    Thank you in advance for your help!
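    For question 2, one standard pattern is optimistic concurrency: give each trade row a version column and let an update succeed only if the version is unchanged since the row was read. A minimal sketch of the idea, shown in Java/JDBC terms for illustration (the trades table and its columns are invented; the application itself is C#, where the same SQL applies):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class TradeDao {
            // Returns true if the update won the race; false means another user
            // changed the row since it was read, so refresh and retry (or warn).
            public boolean updatePrice(Connection conn, long tradeId,
                                       double newPrice, int versionRead) throws SQLException {
                String sql = "UPDATE trades SET price = ?, version = version + 1 "
                           + "WHERE id = ? AND version = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setDouble(1, newPrice);
                    ps.setLong(2, tradeId);
                    ps.setInt(3, versionRead);
                    return ps.executeUpdate() == 1;
                }
            }
        }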


  • How do I manage the limit of executions per hour (max: 1000 requests per hour) without a database?

    - by cslavoie
    I am currently developing a script in PHP to fetch web pages. The fact is that by doing so, I occasionally make too many requests to a particular website. In order to control any overflow, I would like to keep track of how many requests have been made in the last hour or so for each domain. It doesn't need to be perfect, just a good estimate. I don't have access to a database, except SQLite 2. I would really like something simple, because there will typically be a lot of updates, which is kind of heavy for an SQLite database. If no one has a magical solution, I'll go with SQLite, but I was curious what you can come up with. Thank you very much.
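    One database-free approach is a fixed-window counter keyed by domain and current hour, so stale windows simply stop being read. A minimal sketch of the bookkeeping, in Java for illustration (the script itself is PHP; there the map could be serialized to a small file between runs):

        import java.util.concurrent.ConcurrentHashMap;

        // Fixed-window rate accounting: one counter per (domain, hour) pair.
        public class HourlyCounter {
            private final ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

            // Records one request and returns the count for this domain in the current hour.
            public int record(String domain) {
                long hour = System.currentTimeMillis() / 3_600_000L;  // ms per hour
                String key = domain + "@" + hour;
                return counts.merge(key, 1, Integer::sum);
            }

            public static void main(String[] args) {
                HourlyCounter c = new HourlyCounter();
                System.out.println(c.record("example.com"));  // 1
                System.out.println(c.record("example.com"));  // 2
            }
        }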


  • Accurate clock in Erlang

    - by buddhabrot
    I was thinking about how to implement a process that gives the number of discrete time intervals that have occurred since it started. Am I losing accuracy here? How do I implement this without loss of accuracy over time and under heavy client abuse? I am kind of stumped about how to do this in Erlang.

        -module(clock).
        -compile([export_all]).

        start(Time) ->
            register(clock, spawn(fun() -> tick(Time, 0) end)).

        stop() ->
            clock ! stop.

        tick(Time, Count) ->
            receive
                nticks ->
                    io:format("~p ticks have passed since start~n", [Count])
            after 0 ->
                true
            end,
            receive
                stop ->
                    void
            after Time ->
                tick(Time, Count + 1)
            end.


  • Why does InnoDB keep on growing for every update?

    - by Akash Kava
    I have a table which consists of heavy blobs, and I wanted to conduct some tests on it. I know deleted space is not reclaimed by InnoDB, so I decided to reuse existing records by updating their values instead of creating new records. But I noticed that whether I delete and insert a new entry, or UPDATE an existing row, InnoDB keeps on growing. Assume I have 100 rows, each storing 500KB of information, and my InnoDB file size is 10MB; when I UPDATE all rows (no insert, no delete), InnoDB grows by ~8MB for every run. All I am doing is storing exactly 500KB of data in each row, with a little modification, and the size of the blob is fixed. What can I do to prevent this? I know about OPTIMIZE TABLE, but I can't use it because in regular usage the table is going to be 60-100GB, and running OPTIMIZE would just stall the entire server.

