Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • MEMORY (HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (fewer than 10,000 rows) that will be under heavy read (scan) and write (update, plus some insert/delete) load. I am really speaking of 10,000 updates or selects per second, executed over only a few (fewer than 10) open MySQL connections. The table is small and does not contain any data that needs to be persisted to disk. So which is faster: InnoDB or MEMORY (HEAP)? My thoughts: both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs (innodb_flush_log_at_trx_commit)? My main concern is the locking behaviour: InnoDB row locks vs. the MEMORY engine's table lock. Will that be the bottleneck in the MEMORY implementation? Thanks for your thoughts!
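    One way to settle this is to benchmark both engines against the real workload. Below is a minimal Python sketch, assuming the mysql-connector-python package, a throwaway database named benchdb, and made-up credentials; table and column names are purely illustrative. Note it measures single-connection throughput only, so it will not show the row-lock vs. table-lock contention in question; for that, several copies of the update loop would need to run on concurrent connections.

      # Hedged sketch: compare UPDATE throughput of MEMORY vs. InnoDB.
      # Assumes mysql-connector-python and a throwaway database named "benchdb".
      import time
      import mysql.connector

      conn = mysql.connector.connect(user="root", password="secret", database="benchdb")
      cur = conn.cursor()

      for engine in ("MEMORY", "InnoDB"):
          cur.execute(f"DROP TABLE IF EXISTS t_{engine}")
          cur.execute(f"CREATE TABLE t_{engine} (id INT PRIMARY KEY, val INT) ENGINE={engine}")
          cur.executemany(f"INSERT INTO t_{engine} (id, val) VALUES (%s, %s)",
                          [(i, 0) for i in range(10000)])
          conn.commit()

          start = time.time()
          for i in range(10000):
              cur.execute(f"UPDATE t_{engine} SET val = val + 1 WHERE id = %s", (i,))
          conn.commit()
          print(engine, "updates/sec:", int(10000 / (time.time() - start)))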

    Read the article

  • Looking for a Unix tool/script that, given an input path, will batch uncompressed text files into ~100MB groups and compress each batch into a single gzip file

    - by newToFlume
    I have a dump of thousands of small text files (1-5MB each), each containing lines of text. I need to "batch" them up so that each batch is of a roughly fixed size, say 100MB, and compress that batch. A batch could be: a single file that is just a 'cat' of the contents of the individual text files, or just the individual text files themselves. Caveats: unix split -b will not work here as I need to keep lines of text intact, and using its lines option is complicated because there is a large variance in the number of bytes per line. The batches need not be exactly the requested size, as long as they are within 5% of it. The lines are critical and should not be lost: I need to confirm that the input made its way to the output without loss, ideally with some rolling checksum (something like CRC32, but better/"stronger" in the face of collisions). A script should do nicely, but this seems like a task someone has done before, and it would be nice to see some code (preferably Python or Ruby) that does at least something similar.
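    A minimal Python sketch of one approach, under the assumption that whole files (and therefore whole lines) are never split across batches; the input path, output names, and 100MB target are placeholders. SHA-256 is used instead of CRC32 because collision resistance is the stated concern.

      # Hedged sketch: group small text files into ~100MB batches, gzip each batch,
      # and record a SHA-256 of the concatenated content for loss checking.
      # Whole files are never split, so lines stay intact.
      import glob, gzip, hashlib, os

      TARGET = 100 * 1024 * 1024  # ~100MB per batch, give or take the last file added
      files = sorted(glob.glob("input_dir/*.txt"))  # hypothetical input path

      batch, size, batch_no = [], 0, 0
      checksums = {}

      def flush(batch, batch_no):
          digest = hashlib.sha256()
          with gzip.open(f"batch_{batch_no:04d}.gz", "wb") as out:
              for path in batch:
                  with open(path, "rb") as f:
                      data = f.read()
                      digest.update(data)
                      out.write(data)
          checksums[f"batch_{batch_no:04d}.gz"] = digest.hexdigest()

      for path in files:
          fsize = os.path.getsize(path)
          if batch and size + fsize > TARGET:
              flush(batch, batch_no)
              batch, size, batch_no = [], 0, batch_no + 1
          batch.append(path)
          size += fsize
      if batch:
          flush(batch, batch_no)

      print(checksums)  # compare against SHA-256 of the gunzipped output later

    Verification is then just gunzipping each batch and comparing its digest against the recorded value.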

    Read the article

  • How to test if a user has uploaded a file?

    - by Tristan
    Hello, on a page I have:

      if (!empty($_FILES['logo']['name'])) {
          $dossier = 'upload/';
          $fichier = basename($_FILES['logo']['name']);
          $taille_maxi = 100000;
          $taille = filesize($_FILES['logo']['tmp_name']);
          $extensions = array('.png', '.jpg', '.jpeg');
          $extension = strrchr($_FILES['logo']['name'], '.');
          // Start of the security checks...
          if (!in_array($extension, $extensions)) {
              $erreur = 'ERROR you must upload the right type';
          }
          if ($taille > $taille_maxi) {
              $erreur = 'too heavy';
          }
          if (!empty($erreur)) {
              .......................
          }

    The problem is that if the user wants to edit their information WITHOUT uploading a LOGO, it raises the error 'error you must upload the right type'. So if the user didn't put anything in the file input box, I don't want to enter these condition tests. I tested both if (!empty($_FILES['logo']['name'])) and if (isset($_FILES['logo']['name'])), but neither seems to work. Any ideas? Thanks.

    Read the article

  • How does the HQL engine execute an HQL query at the back end?

    - by Maddy.Shik
    I want to understand how Hibernate executes an HQL query internally, or in other words how the HQL query engine works. Please suggest some good links for this. One reason for asking is the following problem:

      class Branch {
          // lazily loaded
          @JoinColumn(name="company_id")
          Company company;
      }

    Since Company is a heavy object, it is lazily loaded. Now I have the HQL query "from Branch as branch where branch.company.id = :companyId". My concern is that if, in order to run this query, the HQL engine has to retrieve the Company object, that is a performance hit, and I would prefer to add one more property to the Branch class, i.e. companyId. In that case the HQL query would be "from Branch as branch where branch.companyId = :companyId". If the HQL engine first generates SQL from the HQL and then fires the SQL query itself, there should be no performance issue. Please let me know if the problem is not understandable.

    Read the article

  • Hibernate is performing unwanted SELECTs on call to saveOrUpdate

    - by digiarnie
    Let's say I have a House entity which maps to many Person entities. I then load an existing House which has 20 occupants:

      beginTransaction();
      House house = houseDao.find(1L);
      commitTransaction();

    Later in the code, I can then add a new Person to the House:

      ...
      List<Person> people = house.getPeople();
      people.add(new Person("Dilbert"));
      ...

    When I make the call session.saveOrUpdate(house), Hibernate performs 21 queries: 1 to SELECT the House and 20 to SELECT each existing Person in the House. I'm sure it's a small issue on my part; however, what should I do so that I can add a new Person to the House without having such a heavy hit on the database in this situation? This is all done within the same session.

    Read the article

  • Configure redis to not have everything in memory?

    - by acidzombie24
    I like Redis because it lets me do operations on data structures. I wanted to see what would happen if I put more data into Redis than I have RAM, so I wrote a loop that repeatedly inserted 30k-byte values and set maxmemory 100mb. I figured it would stay at 100MB, but it kept growing, past 1GB and then past 2GB, and suddenly it crashed because I was running the 32-bit version. Now, I don't understand what the point of maxmemory is. I am using the Windows version, so maybe it is ignored. Does Redis have to have everything in memory? If I have a site (on Linux) with a 10GB database and 512MB of RAM on the machine, will Redis work? I don't need it to be amazingly fast, I just prefer modifying data in it to SQL (although I hope it is still faster than MySQL).
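    For reference, a memory cap only leads to evictions once an eviction policy is configured alongside it; a minimal redis-py sketch of setting and observing this from a client is below (host, port, and key names are assumptions; this targets a stock Redis server, so behaviour of the unofficial Windows port may differ).

      # Hedged sketch: set a memory cap and an eviction policy from a client,
      # then watch used memory while inserting values. Assumes a local Redis
      # instance and the redis-py package.
      import redis

      r = redis.Redis(host="localhost", port=6379)

      r.config_set("maxmemory", "100mb")
      r.config_set("maxmemory-policy", "allkeys-lru")  # evict least-recently-used keys

      payload = b"x" * 30_000
      for i in range(20_000):
          r.set(f"key:{i}", payload)
          if i % 2_000 == 0:
              print(i, r.info("memory")["used_memory_human"])

    With a policy such as allkeys-lru, memory use should level off near the cap as old keys are evicted; with the default noeviction policy, writes start being rejected once the cap is reached rather than memory growing past it. Stock Redis keeps the whole dataset in RAM either way, so a 10GB dataset will not fit on a 512MB machine.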

    Read the article

  • error in assigning a const character to an unsigned char array in C++

    - by mekasperasky
    #include <iostream>
    #include <fstream>
    #include <cstring>
    using namespace std;

    typedef unsigned long int WORD;  /* Should be 32-bit = 4 bytes */

    #define w 32  /* word size in bits */
    #define r 12  /* number of rounds */
    #define b 16  /* number of bytes in key */
    #define c 4   /* number of words in key: c = max(1,ceil(8*b/w)) */
    #define t 26  /* size of table S = 2*(r+1) words */

    WORD S[t], L[c];                       /* expanded key table */
    WORD P = 0xb7e15163, Q = 0x9e3779b9;   /* magic constants */

    /* Rotation operators. x must be unsigned, to get logical right shift */
    #define ROTL(x,y) (((x)<<(y&(w-1))) | ((x)>>(w-(y&(w-1)))))
    #define ROTR(x,y) (((x)>>(y&(w-1))) | ((x)<<(w-(y&(w-1)))))

    void RC5_DECRYPT(WORD *ct, WORD *pt)   /* 2 WORD input ct / output pt */
    {
        WORD i, B=ct[1], A=ct[0];
        for (i=r; i>0; i--)
        {
            B = ROTR(B-S[2*i+1],A)^A;
            A = ROTR(A-S[2*i],B)^B;
        }
        pt[1] = B-S[1]; pt[0] = A-S[0];
    }

    void RC5_SETUP(unsigned char *K)       /* secret input key K[0...b-1] */
    {
        WORD i, j, k, u=w/8, A, B, L[c];
        /* Initialize L, then S, then mix key into S */
        for (i=b-1,L[c-1]=0; i!=-1; i--) L[i/u] = (L[i/u]<<8)+K[i];
        for (S[0]=P,i=1; i<t; i++) S[i] = S[i-1]+Q;
        for (A=B=i=j=k=0; k<3*t; k++,i=(i+1)%t,j=(j+1)%c)  /* 3*t > 3*c */
        {
            A = S[i] = ROTL(S[i]+(A+B),3);
            B = L[j] = ROTL(L[j]+(A+B),(A+B));
        }
    }

    void printword(WORD A)
    {
        WORD k;
        for (k=0; k<w; k+=8) printf("%02.2lX",(A>>k)&0xFF);
    }

    int main()
    {
        WORD i, j, k, pt[2], pt2[2], ct[2] = {0,0};
        unsigned char key[b];
        ofstream out("cpt.txt");
        ifstream in("key.txt");
        if(!in)  { cout << "Cannot open file.\n"; return 1; }
        if(!out) { cout << "Cannot open file.\n"; return 1; }
        key="111111000001111";
        RC5_SETUP(key);
        ct[0]=2185970173;
        ct[1]=3384368406;
        for (i=1; i<2; i++)
        {
            RC5_DECRYPT(ct,pt2);
            printf("\n plaintext ");
            printword(pt[0]);
            printword(pt[1]);
        }
        return 0;
    }

    When I compile this code, I get two warnings and also an error saying that I can't assign a char value to my character array. Why is that?

    Read the article

  • Strange Phantom Local Disks appearing in my drive list...

    - by Paul
    Win7 Home Premium 32-bit. I seem to have several phantom Local Disks mapped to different letters; they are 0 bytes in size. Strangely, they do not show up when I view my drives through Windows Explorer, but if I open an application such as ACDSee Pro or MS Word and then go to open a file, I can see all these Local Disks mapped to different letters. This means that when I plug in my external hard disk it ends up mapped to letter R instead of its usual G, which messes up any programs I have pointing to it by default. How did they get there, and more importantly, how do I get rid of them?

    Read the article

  • Building an ASP.NET MVC site, should I go with LINQ to SQL?

    - by aspm
    So I'm about to start a new website from scratch and I've spent about a week trying to figure out what technology to go with. I'm sold on ASP.NET MVC; I'm 100% sure I'm going to love using that. But what I am not so sure about yet is using LINQ to SQL. So far I've gathered some data:

      1) Stack Overflow uses it, so it can't be that bad
      2) It can be REALLY slow if you don't take advantage of compiled queries
      3) It will always be slower than ADO.NET, but can be almost as fast if you use #2 in the proper places
      4) It is NOT the preferred MS solution (there was a thread here on SO about dropping support)

    I'm itching to use it, but just want to make sure it's the best choice for me. I come from a heavy ADO/stored-procedure and traditional ASP.NET background (this will be my first experience with ASP.NET MVC).

    Read the article

  • SQL Compact Edition 3.5 SP 1 - LockTimeOutException - how to debug?

    - by Bob King
    Intermittently in our app, we encounter LockTimeoutExceptions being thrown from SQL CE. We've recently upgraded to 3.5 SP1, and a number of them seem to have gone away, but we still see them occasionally. I'm certain it's a bug in our code (which is multi-threaded) but I haven't been able to pin it down precisely. Does anyone have any good techniques for debugging this problem? The exceptions log like this (there is never a stack trace for them):

      SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices
      and 5000ms for desktops. The default lock timeout can be increased in the connection string
      using the ssce:default lock timeout property.
      [ Session id = 6, Thread id = 7856, Process id = 10116, Table name = Product,
        Conflict type = s lock (x blocks), Resource = DDL ]

    Our database is read-heavy but seldom writes, and I think I've got everything protected where it needs to be. EDIT: SQL CE already automatically uses NOLOCK http://msdn.microsoft.com/en-us/library/ms172398(sql.90).aspx

    Read the article

  • How to tune Windows 2008r2 and IIS to maximize single file download speeds?

    - by uSlackr
    We recently put up an IIS site (on WinSvr 2008r2) that is used almost exclusively for downloading files over the internet. The data exists as a large collection of .zip files ranging from 1MB - 35GB in size. We want to allow a lot of downloads during a day (more than 500GB) but have implemented an outbound ASA throttle at 60mbps in order to preserve bandwidth for other uses. The total link speed is 100mbps. Here's the interesting part: While we can serve up multiple downloads to hit the 60mbps cap, we cannot get any single download to exceed 2.5M bytes/sec (20 Mbits/s). Is there any TCP or IIS tuning we can do to push up individual download speeds? Or something else to look at?

    Read the article

  • Stored Procedure or calculations via IQueryable?

    - by Shawn Mclean
    This is a question based on choosing performance over design practices. I have a method that will be executed many times a second:

      public static IQueryable<IPerson> InRadius(this IQueryable<IPerson> query,
                                                 Coordinate center, double radius)
      {
          return (from u in query
                  where CallHeavyMathFormula(u, center, radius)
                  select u);
      }

    This extension method for IQueryable generates SQL that does some heavy math (cosine, sine, etc.), which means the application sends 1-2KB of SQL to the server per call. I've heard of placing all application logic in your application. I would also like to move to a database such as Azure or one of those scalable databases in the future. How do I handle something like this? Should I leave it as it is now or write stored procedures? How do applications like Twitter or Facebook do it?
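    The question doesn't say what CallHeavyMathFormula actually computes, but for a coordinate-plus-radius filter it is typically a great-circle (haversine) distance. Purely as a point of reference, here is that computation in a minimal Python sketch (an assumption about the formula, not the author's code); whether it lives in generated SQL or a stored procedure, the per-row arithmetic is the same, and the bigger wins usually come from a bounding-box pre-filter or a spatial index so the formula runs on fewer rows.

      # Hedged illustration: the kind of "heavy math" a radius filter usually inlines
      # into SQL is a great-circle (haversine) distance, shown here in plain Python.
      import math

      def haversine_km(lat1, lon1, lat2, lon2):
          """Great-circle distance between two points, in kilometres."""
          r = 6371.0  # mean Earth radius in km
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dphi = math.radians(lat2 - lat1)
          dlmb = math.radians(lon2 - lon1)
          a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      def in_radius(people, center, radius_km):
          """Filter an iterable of (lat, lon) pairs to those within radius_km of center."""
          clat, clon = center
          return [p for p in people if haversine_km(p[0], p[1], clat, clon) <= radius_km]

      print(in_radius([(51.5, -0.12), (48.85, 2.35)], (51.5, -0.1), 25))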

    Read the article

  • How to simulate :active css pseudo class in android on non-link elements?

    - by chas s.
    I'd like to be able to mimic the behavior of the :active pseudo-class on all elements in Android WebKit. Currently, the :active syntax only works on a elements (links). Nearly all of the actionable elements in the app I'm working on are something other than a standard link tag. iOS WebKit supports :active on all elements.

      /* works on both Android and iOS WebKit */
      a:active { color: blue; }

      /* works on iOS WebKit, does not work on Android WebKit */
      div:active { color: red; }

    I've found a couple of resources [1,2] that solve similar problems, but they're both a bit heavy, and I'm wondering if there's a lighter-weight solution that I'm just not able to find.

      http://cubiq.org/remove-onclick-delay-on-webkit-for-iphone
      http://code.google.com/intl/ro-RO/mobile/articles/fast_buttons.html

    Read the article

  • Does anybody know why the sectors of the IBM floppy disk are numbered 1 to 8 (and not 0 to 7)?

    - by Olivier Briand
    I am now programming on an 8-bit Z80 computer with CP/M 2.2 (as a hobby), and the floppy disk format is IBM: 40 tracks, 8 sectors per track, 512 bytes per sector; free space is 154 KB on each face of the disk. Why are the sectors indexed 1 to 8 (and not zero to seven, as is usual with computers)? The catalog of the floppy disk is on track 1 (sectors 1 to 4, 64 entries). I'm wondering: is the catalog on track zero? Is track zero reserved for a system (just as tracks 0 and 1 are reserved for the system on a CP/M floppy disk, with the catalog on track 2)?

    Read the article

  • "bin/sh: can't access tty; job control turned off” error when running shellcode"

    - by Nosrettap
    I'm writing shellcode to exploit a buffer overflow vulnerability on a server. To do so I have port-binding shellcode that I send to the server, and then I run (from a Linux terminal) the command telnet serverAdress 4444, where 4444 is the port which I have opened up. The hope is that I will receive a shell back that I can use to execute commands. However, I always end up with the message bin/sh: can't access tty; job control turned off. I can't change any of the server code, and I believe the shellcode is correct because I got it from this website (http://www.tsirogiannis.com/exploits-vulnerabilities-videos-papers-shellcode/linuxx86-port-binding-shellcode-xor-encoded-152-bytes/). From my research, it appears that this may have to do with the mode that my terminal is running in (something called interactive mode, or something like that). All computers involved are Linux machines, and the machine that I am on is running the latest version of Ubuntu. Any ideas what this job-control error means and how I can fix it?

    Read the article

  • How can I clear the appcache on the Google Chrome iPad app?

    - by Jannis
    I've written a little HTML5-based web app that I am trying to debug on the iPad using the Chrome for iPad app. I have added a cache.manifest file to my app which caches most static resources heavily; however, since I now want to debug the app, I need a way to clear this cache. I know that on Chrome for Mac you can use chrome://appcache-internals/, however this page does not exist in the iPad app of Chrome. The regular "Clear Browsing Data" does not empty the appcache, at least not in my case. Does anyone know how I can clear the appcache for the Chrome iPad app?

    Read the article

  • Does Qt support pure virtual slots?

    - by ereOn
    Hi, my GUI project in Qt has a lot of "configuration page" classes which all inherit directly from QWidget. Recently, I realized that all these classes share two common slots (loadSettings() and saveSettings()). Regarding this, I have two questions: Does it make sense to write an intermediate abstract base class (let's name it BaseConfigurationPage) with these two slots as pure virtual methods? (Every possible configuration page will always have these two methods, so I would say "yes".) Before I make the heavy change in my code (if I have to): does Qt support pure virtual slots? Is there anything I should be aware of? Here is a code example describing everything:

      class BaseConfigurationPage : public QWidget
      {
          // Some constructor and other methods, irrelevant here.
      public slots:
          virtual void loadSettings() = 0;
          virtual void saveSettings() = 0;
      };

      class GeneralConfigurationPage : public BaseConfigurationPage
      {
          // Some constructor and other methods, irrelevant here.
      public slots:
          void loadSettings();
          void saveSettings();
      };

    Read the article

  • How to use JNI, but only when available for current platform?

    - by Mecki
    What is the common way (or best practice) to optionally use JNI? E.g. I have a Java class and this class is 100% pure Java, so it can run on all platforms. However, on some platforms I'd like to speed up some heavy calculations using JNI, which works fine. Unfortunately I cannot support every existing Java platform in the world, so I guess it is fine to initially support only the big three: Linux, Windows, Mac OS X. So what I'd like to do is use JNI on those three platforms and use the 100% pure Java version on all other platforms. Now I can think of various ways to do that (loading a class dynamically, for example, and either loading the JNI class or the pure Java one), but since this seems like a common issue that thousands of projects must have solved before, I'm really surprised not to find any documentation or references on how to solve it most elegantly or effectively.

    Read the article

  • How do I run multiple ruby scripts sequentially on my local machine?

    - by marcamillion
    I have about 5 or 6 Ruby scripts I want to run, one right after another. These are all on my local machine (OS X) and won't be run on a server. Each takes about 15 minutes to run, and I don't want to have to wait for each one to finish before running the others manually. Without using something as heavy as delayed_job or some other queueing gem, how can I achieve this? Or should I go through the hassle of setting up Sidekiq or something else? Thanks. P.S. It would be nice to restart a script if it times out, which happens occasionally (I am doing web crawling, so keeping an HTTP connection open sometimes gives me issues).
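    As a sketch of the pattern (shown here in Python purely for illustration; a short Ruby or shell wrapper would look much the same), run each script in turn with a timeout and retry it if it times out. Script names, the timeout, and the retry count are placeholders.

      # Hedged sketch: run several scripts one after another, retrying any that
      # time out. Script names and limits are examples only.
      import subprocess

      scripts = ["crawl_a.rb", "crawl_b.rb", "crawl_c.rb"]  # hypothetical file names
      TIMEOUT = 30 * 60    # give each script 30 minutes
      MAX_TRIES = 3

      for script in scripts:
          for attempt in range(1, MAX_TRIES + 1):
              try:
                  subprocess.run(["ruby", script], check=True, timeout=TIMEOUT)
                  break  # finished cleanly, move on to the next script
              except subprocess.TimeoutExpired:
                  print(f"{script} timed out (attempt {attempt}), restarting")
              except subprocess.CalledProcessError as e:
                  print(f"{script} failed with exit code {e.returncode}, skipping")
                  break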

    Read the article

  • Tool to measure Render time

    - by Noob
    Hi folks, is there a tool out there to measure the actual render time of an element (or elements) on a page? I don't mean download time of the resources, but the actual time the browser took to render something. I know this time would vary based on factors on the client machine, but it would still be very handy to know which elements the rendering engine takes a while to lay out. I would imagine this would be a useful utility, since web apps are becoming pretty client-heavy now. Any thoughts?

    Read the article

  • Event vs. thread programming on the server side

    - by AlxPeter
    We are planning to start a fairly complex web portal which is expected to attract good local traffic, and I've been told by my boss to consider/analyse node.js for the server side. I think scalability and multi-core support can be handled with an Nginx or Cherokee up in front.

      1) Is node.js ready for some serious/big business?
      2) Does this event/asynchronous paradigm on the server side have the potential to support heavy traffic and data operations, considering that everything is processed in a single thread and all live connections would be lost if it crashed (though it's easy to restart)?
      3) What are the advantages of event-based programming compared to the thread-based style, or vice versa? (I know of the higher cost associated with thread switching, but hardware can be squeezed further with the event model.)

    The following are interesting but (to some extent) contradicting papers:

      1) http://www.usenix.org/events/hotos03/tech/full_papers/vonbehren/vonbehren_html
      2) http://pdos.csail.mit.edu/~rtm/papers/dabek:event.pdf

    Read the article

  • jQuery UI .widgets resetting scroll positions of elements contained within

    - by Derek Adair
    I am making heavy use of jQuery UI in my latest project. Unfortunately I've hit a major wall due to some really whacky behavior exhibited by the jQuery UI widgets when they contain elements with scrollbars for overflow. Check out this demo:

      1. Scroll down in one of the .scroll-container elements
      2. Click an accordion header
      3. Click on the old header - note the element was auto-scrolled to the top

    Is there any way to prevent this from happening? It's screwing with a major plugin of mine that utilizes jQuery scrolling. I'm flat-out lost as to what to do here! Perhaps this is a bug worth mentioning in the jQuery UI dev forums... EDIT: I am using Chrome 8.0.552.231 and OSX 10.6.5

    Read the article

  • Why is my /dev/random so slow when using dd?

    - by Mikey
    I am trying to semi-securely erase a bunch of hard drives. The following is working at 20-50Mb/s:

      dd if=/dev/zero of=/dev/sda

    But

      dd if=/dev/random of=/dev/sda

    seems not to work. Also, when I type

      dd if=/dev/random of=stdout

    it only gives me a few bytes regardless of what I pass for bs= and count=. Am I using /dev/random wrong? What other info should I look for to move this troubleshooting forward? Is there some other way to do this with a script, something like

      makeMyLifeEasy | dd if=stdin of=/dev/sda

    or something like that...
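    For what it's worth, /dev/random blocks whenever the kernel's entropy estimate runs low, which is why it trickles out only a few bytes; /dev/urandom does not block, so dd if=/dev/urandom of=/dev/sda is the usual substitute. A minimal Python sketch of the same idea is below; the target path is a harmless example file, since pointing it at a real block device would destroy its contents.

      # Hedged sketch: stream non-blocking pseudo-random data to a target, the same
      # effect as dd if=/dev/urandom. WARNING: pointing this at a real block device
      # (e.g. /dev/sda) will destroy its contents.
      import os

      TARGET = "wipe_test.img"   # substitute the device you actually mean to wipe
      CHUNK = 1024 * 1024        # 1 MiB writes

      with open(TARGET, "wb") as out:
          for _ in range(100):              # 100 MiB for the test file
              out.write(os.urandom(CHUNK))  # os.urandom reads from the kernel CSPRNG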

    Read the article

  • PDF - re/generate image using stream content

    - by tom_tap
    I have a PDF file with 8 content streams (bytes) which behave like image layers (but they are not layers that I can turn off/on in Adobe Reader). I would like to extract these images separately, because they overlap each other (and thus I am not able to "Take a Snapshot" or "Copy File to Clipboard"). So now I have these streams in the format below:

      <Start Stream>
      q
      599.7601 0 0 71.99921 5951.03423 4282.48177 cm
      /Im0 Do
      Q
      q
      599.7601 0 0 71.99921 5951.03432 4210.48177 cm
      /Im1 Do
      Q
      q
      599.7601 0 0 71.99921 5951.03441 4138.48177 cm
      /Im2 Do
      [...]

    My question is: how can I use these data to generate or regenerate the images, so that I can save them as raster or vector files? I have already tried pstoedit, but it doesn't work properly because of these multiple streams. Same with PDFedit.

    Read the article

  • Database implementation question?

    - by gundam
    Consider a disk with a sector size of 512 bytes, 2000 tracks per surface, 50 sectors per track, and 5 double-sided platters; average seek time is 10 msec. Assume a block size of 1024 bytes is selected, and that a file containing 100,000 records of 100 bytes each is to be stored on the disk, with NO record allowed to span 2 blocks.

      1) How many blocks are needed to store the entire file?
      2) If the file is arranged sequentially on disk, how many surfaces are required?

    Now, I have calculated that 10,000 blocks are needed to store the 100,000 records, but I am not sure how to find the answer for the surfaces required. I have only calculated that the capacity of a track is 25 KB and the capacity of a surface is 50,000 KB, but I don't know how to calculate the number of surfaces. Could anyone help me get the answer? Thanks a lot!!
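    The arithmetic the question sets up can be carried through like this (a sketch that simply restates the numbers already given):

      # Hedged worked example of the arithmetic set up in the question.
      SECTOR = 512                 # bytes per sector
      SECTORS_PER_TRACK = 50
      TRACKS_PER_SURFACE = 2000
      BLOCK = 1024                 # chosen block size, bytes
      RECORD = 100                 # bytes per record
      RECORDS = 100_000

      records_per_block = BLOCK // RECORD           # 10, since records cannot span blocks
      blocks_needed = RECORDS // records_per_block  # 10,000 blocks, matching the poster's figure

      track_bytes = SECTORS_PER_TRACK * SECTOR          # 25,600 bytes = 25 KB per track
      surface_bytes = TRACKS_PER_SURFACE * track_bytes  # 51,200,000 bytes = 50,000 KB per surface
      blocks_per_surface = surface_bytes // BLOCK       # 50,000 blocks per surface

      surfaces_needed = -(-blocks_needed // blocks_per_surface)  # ceiling division
      print(blocks_needed, blocks_per_surface, surfaces_needed)  # 10000 50000 1

    So if the file is laid out sequentially, the 10,000 blocks occupy only a fifth of one surface, and a single surface suffices.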

    Read the article
