Search Results

Search found 43200 results on 1728 pages for 'large object pattern'.

  • How can I get an iterable resultset from the database using PDO, instead of a large array?

    - by Tchalvak
    I'm using PDO inside a database abstraction library's query function. I'm using fetchAll(), which can get memory-intensive when there are a lot of results, so I want to provide an argument to toggle between a fetchAll associative array and a PDO result set that can be iterated over with foreach and requires less memory (somehow). I remember hearing about this, and I searched the PDO docs, but I couldn't find any useful way to do it. Does anyone know how to get an iterable resultset back from PDO instead of just a flat array? And am I right that using an iterable resultset will be easier on memory? I'm using Postgresql, if it matters in this case. The current query function is as follows, just for clarity:

        /**
         * Running bound queries on the database.
         *
         * Use: query('select all from players limit :count', array('count'=>10));
         * Or:  query('select all from players limit :count', array('count'=>array(10, PDO::PARAM_INT)));
         **/
        function query($sql_query, $bindings=array()){
            DatabaseConnection::getInstance();
            $statement = DatabaseConnection::$pdo->prepare($sql_query);
            foreach($bindings as $binding => $value){
                if(is_array($value)){
                    $statement->bindParam($binding, $value[0], $value[1]);
                } else {
                    $statement->bindValue($binding, $value);
                }
            }
            $statement->execute();
            // TODO: Return an iterable resultset here, and allow switching
            // between array and iterable resultset.
            return $statement->fetchAll(PDO::FETCH_ASSOC);
        }
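    One possible approach (a sketch against the function above, not tested): PDOStatement itself implements Traversable, so the executed statement can be returned as-is and iterated with foreach, fetching one row at a time instead of materializing the whole array. The $iterable flag here is a hypothetical addition to the original signature:

        function query($sql_query, $bindings=array(), $iterable=false){
            DatabaseConnection::getInstance();
            $statement = DatabaseConnection::$pdo->prepare($sql_query);
            // ... bind parameters exactly as in the original ...
            $statement->execute();
            if ($iterable) {
                $statement->setFetchMode(PDO::FETCH_ASSOC);
                return $statement; // foreach (query(...) as $row) streams rows
            }
            return $statement->fetchAll(PDO::FETCH_ASSOC);
        }

    Whether this actually lowers peak memory can depend on the driver; some drivers buffer the full result set client-side unless an unbuffered or cursor mode is requested.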

  • Syncing a large personal school-material git repo with things such as casual notes? Rsync, wget and Git -- or some ready-made tool?

    - by hhh
    My friend wants to store her school notes electronically and process them fast, with backups. She already has a repo over 2 GB in size, and it's growing all the time (mostly appended material, i.e. more school notes in different formats: pdf, pictures both scanned and not, some text files, etc.). The goal is for my friend to be able to process the notes fast. I suggested a command like this:

        # crontab -e
        @weekly wget --random-wait -e robots=off -U mozilla --mirror http://VeryLong.com

    But I think plugging in Rsync somewhere could make it much better, together with Git. How would you help my friend process and store the school material under Git version control while still keeping the size reasonable? Perhaps related:

        rsync .git directory
        rsync git big repository
        Different scope: Git/rsync mix for projects with large binaries and text files
        What's a good way to organize a large collection of personal scripts using git?
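    One way to combine them (a rough sketch; host, paths and schedule are invented): let Git keep local history, and let plain rsync carry the offsite mirror so the big binaries never have to round-trip through Git:

        #!/bin/sh
        # weekly-notes-backup.sh -- snapshot locally, then mirror offsite
        cd "$HOME/notes" || exit 1
        git add -A
        git commit -m "weekly snapshot"   # exits non-zero when nothing changed; the rsync below still runs
        rsync -av --exclude='.git' "$HOME/notes/" backuphost:notes-mirror/

    Installed from cron as a single line (crontab -e): @weekly $HOME/bin/weekly-notes-backup.sh. A ready-made tool worth a look for this exact problem is git-annex, which tracks large files in Git without storing their content in the repository itself.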

  • How do I reference members of a single object passed to the View?

    - by Juxtaposed
    I'm new to MVC2 in ASP.NET/C#, so please forgive me if I misunderstand something. I have code similar to this in my Controller:

        var singleInstance = new Person("John");
        ViewData["myInstance"] = singleInstance;
        return View();

    So in my view, Index.aspx, I want to be able to reference members of that object. For example, Person has a member called Name, which is set in the constructor. In the view I want to get Person.Name from what is stored in the ViewData object, e.g.:

        <%= ViewData["myInstance"].Name %>

    That doesn't work. The only real workaround I've found is to do something like this:

        <% var thePerson = ViewData["myInstance"];
           print (or whatever the method is) thePerson.Name; %>

    Any help would be much appreciated... This was so much easier in PHP/Zend Framework... sigh
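    A likely fix (a sketch; Person is the questioner's own type): ViewData is a dictionary of object, so the value has to be cast back to its real type before members like Name become visible:

        <%= ((Person)ViewData["myInstance"]).Name %>

    A strongly typed view sidesteps the cast entirely: declare the page as ViewPage<Person>, pass the object with return View(singleInstance);, and write <%= Model.Name %>.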

  • Can you detect a 301 redirect with the Microsoft.XMLHTTP object?

    - by dmb
    I'm using VBScript and the Microsoft.XMLHTTP object to scrape some web data. I have a list of URLs to check, but unfortunately some of them 301 redirect to others on the list, so I wind up with redundant data. Is it at all possible to make the XMLHTTP object fail on a 301 redirect? Or at least cache the original response header? Or otherwise just let me know what happened? (Note: I have no control over the server I'm requesting data from; when I get new data, I could check whether it's redundant, but I'd like to avoid that if possible.) Any ideas would be greatly appreciated.
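    One possible angle (a sketch; WinHttp.WinHttpRequest.5.1 is a different COM object than Microsoft.XMLHTTP, but it is scriptable from VBScript and lets you switch off automatic redirect-following):

        Dim http
        Set http = CreateObject("WinHttp.WinHttpRequest.5.1")
        http.Option(6) = False   ' WinHttpRequestOption_EnableRedirects
        http.Open "GET", "http://example.com/page", False
        http.Send
        If http.Status = 301 Then
            WScript.Echo "Moved permanently to: " & http.GetResponseHeader("Location")
        End If

    With redirects disabled, the 301 comes back as the actual status plus a Location header instead of being followed silently.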

  • Why does File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished at 20462 files. Out of curiosity I wrote a tiny recursive function to count the items which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could've caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system.

  • MySQL: how to generate a count table by PATTERN / LIKE statement? Very tricky

    - by Abdulrahman Al-Qabandi
    To simplify: I've got a database of registered users, and I want to count how many emails there are for each email domain name (note: I do not know all the domain names in advance). For example, a Users table (local parts elided):

        id | email
        ---+---------------------
        1  | ...@hotmail.com
        2  | ...@somesite.aaa      <-- unknown to me
        3  | ...@somesite.aaa      <-- unknown to me
        4  | ...@hotmail.com
        5  | ...@unknownsite.aaa   <-- unknown to me
        6  | ...@hotmail.com
        7  | ...@yahoo.com
        8  | ...@yahoo.com

    (Note: I want to count each email domain without specifying which domain exactly.) So the result I want is:

        suffix          | count
        ----------------+------
        hotmail.com     | 3
        somesite.aaa    | 2
        unknownsite.aaa | 1
        yahoo.com       | 2

    Again, I stress this: I do not know unknownsite.aaa, nor can I mention it in a statement, because it is unknown to me. I hope I am clear. So essentially I want to make a statistic of what my website's users use as an email host. But like I said, and I will repeat for the third time, I do not know every mail host that exists. I am going to investigate this more; I have a feeling this is something MySQL cannot handle.
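    MySQL can in fact do this without knowing any domain in advance: group by the substring after the @. A sketch, reusing the table and column names from the example above:

        SELECT SUBSTRING_INDEX(email, '@', -1) AS suffix,
               COUNT(*) AS `count`
        FROM   Users
        GROUP  BY suffix
        ORDER  BY suffix;

    SUBSTRING_INDEX(email, '@', -1) returns everything after the last '@', so previously unseen domains simply show up as new groups.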

  • What design pattern should be used to create an emulator?

    - by Facon
    I have programmed an emulator, but I have some doubts about how to organize it properly, because I see that it has some problems with the connections between classes (CPU <-> machine board). For example: I/O ports, interrupts, communication between two or more CPUs, etc. I need the emulator to have the best performance and the code to be easy to understand. PS: Sorry for my bad English.
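    One commonly used shape for this (a sketch, not a prescription; all names are invented): a bus object mediates between the CPU and everything else, so the CPU talks to a single interface and devices hang off the bus:

        #include <cstdint>

        // The CPU reads and writes through this interface instead of
        // holding pointers to concrete devices on the board.
        struct Bus {
            virtual uint8_t read(uint16_t addr) = 0;
            virtual void write(uint16_t addr, uint8_t value) = 0;
            virtual ~Bus() = default;
        };

        class CPU {
        public:
            explicit CPU(Bus& bus) : bus_(bus) {}
            void step() {
                uint8_t opcode = bus_.read(pc_++);  // fetch goes via the bus
                // ... decode and execute ...
            }
            void raiseIRQ() { irq_pending_ = true; } // the board pokes the CPU directly
        private:
            Bus& bus_;
            uint16_t pc_ = 0;
            bool irq_pending_ = false;
        };

    The trade-off to measure is the virtual dispatch on the hot read/write path; some emulators replace it with a function-pointer or page-table dispatch once the design settles.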

  • C# slowdown while creating a bitmap - calculating distances from a large List of places for each pixel

    - by user576849
    I'm creating a graphic of the glow of lights above a geographic location based upon Walker's Law:

        Skyglow = 0.01 * Population * DistanceFromCenter^-2.5

    I have a CSV file of places with 66,000 records using 5 fields (id, name, population, latitude, longitude), parsed on the FormLoad event and stored in:

        List<string[]> placeDataList

    Then I set up nested loops to fill in a bitmap using SetPixel. For each pixel on the bitmap, which represents a coordinate on a map (latitude and longitude), the program loops through placeDataList, calculating the distance from that coordinate (pixel) to each place record. The distance (along with population) is used in a calculation to find how much cumulative sky glow is contributed to the coordinate from each place record. So, for every pixel, 66,000 distance calculations must be made. The problem is, this is predictably EXTREMELY slow: on the order of one line of pixels per 30 seconds or so on a 320-pixel-wide image. This is unrelated to SetPixel, which I know is also slow, because the speed is similarly slow when adding the distance calculation results to an array. I don't actually need to test all 66,000 records for every pixel, only the records within 150 miles (i.e. no skyglow is contributed to a coordinate from a small town 3000 miles away). But to find which records are within 150 miles of my coordinate I would still need to loop through all the records for each pixel. I can't use a smaller number of records, because all 66,000 places contribute to skyglow for SOME coordinate in my map as it loops. This seems like a Catch-22, so I know there must be a better method out there. Like I mentioned, the slowdown is related to how many calculations I'm making per pixel, not anything to do with the bitmap. Any suggestions?

        private void fillPixels(int width)
        {
            Color pixelColor;
            int pixel_w = width;
            int pixel_h = (int)Math.Floor((width * 0.424088664));
            Bitmap bmp = new Bitmap(pixel_w, pixel_h);

            for (int i = 0; i < pixel_h; i++)
                for (int j = 0; j < pixel_w; j++)
                {
                    pixelColor = getPixelColor(i, j);
                    bmp.SetPixel(j, i, pixelColor);
                }

            bmp.Save("Nightfall", System.Drawing.Imaging.ImageFormat.Jpeg);
            pictureBox1.Image = bmp;
            MessageBox.Show("Done");
        }

        private Color getPixelColor(int height, int width)
        {
            int c;
            double glow, d, cityLat, cityLon, cityPop;
            double testLat, testLon;
            int size_h = (int)Math.Floor((size_w * 0.424088664));

            testLat = (height * (24.443136 / size_h)) + 24.548874;
            testLon = (width * (57.636853 / size_w)) - 124.640767;

            glow = 0;
            for (int i = 0; i < placeDataList.Count; i++)
            {
                cityPop = Convert.ToDouble(placeDataList[i][2]);
                cityLat = Convert.ToDouble(placeDataList[i][3]);
                cityLon = Convert.ToDouble(placeDataList[i][4]);
                d = distance(testLat, testLon, cityLat, cityLon, "M");
                if (d < 150)
                    glow = glow + (0.01 * cityPop * Math.Pow(d, -2.5));
            }

            if (glow >= 1)
                glow = 1;
            c = (int)Math.Ceiling(glow * 255);
            return Color.FromArgb(c, c, c);
        }
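    The usual way out of the catch-22 is a spatial index built once, so each pixel only examines nearby places. A sketch (modern C# with tuple keys; names reuse the question's; the cell ranges are chosen conservatively for continental-US latitudes, and the exact d < 150 test still does the real filtering):

        // Build once after parsing the CSV: bucket place indices into 1-degree cells.
        private Dictionary<(int, int), List<int>> placeGrid = new Dictionary<(int, int), List<int>>();

        private void buildGrid()
        {
            for (int i = 0; i < placeDataList.Count; i++)
            {
                var key = ((int)Math.Floor(Convert.ToDouble(placeDataList[i][3])),
                           (int)Math.Floor(Convert.ToDouble(placeDataList[i][4])));
                if (!placeGrid.TryGetValue(key, out var bucket))
                    placeGrid[key] = bucket = new List<int>();
                bucket.Add(i);
            }
        }

        // Inside getPixelColor, replace the 66,000-iteration scan with a scan of
        // nearby cells only. +/-3 cells of latitude and +/-4 of longitude cover a
        // 150-mile radius with room to spare at these latitudes.
        int latCell = (int)Math.Floor(testLat), lonCell = (int)Math.Floor(testLon);
        for (int la = latCell - 3; la <= latCell + 3; la++)
            for (int lo = lonCell - 4; lo <= lonCell + 4; lo++)
                if (placeGrid.TryGetValue((la, lo), out var bucket))
                    foreach (int i in bucket)
                    {
                        // ... same distance/glow body as the original loop ...
                    }

    Independently of that, Bitmap.LockBits with a byte buffer is the standard replacement for per-pixel SetPixel once the math is no longer the bottleneck.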

  • Is it possible to parse a web page from the client side for a large number of words, and if so, how?

    - by Technoh
    I have a list of keywords, about 25,000 of them. I would like people who add a certain <script> tag on their web page to have these keywords transformed into links. What would be the best way to go about achieving this? I have tried the simple javascript approach (an array with lots of elements, regexping/replacing each one) and it obviously slows down the browser. I could always process the content server-side if there were a way, from the client, to send the page's content to a cross-domain server script (I'm partial to PHP, but it could be anything), but I don't know of any way to do this. Any other working solution is also welcome.
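    A sketch of the usual client-side trick (plain browser JavaScript; the /define/ link target is invented, and production code would also need to HTML-escape the text before using innerHTML): build one combined regex instead of 25,000 passes, and touch only text nodes so markup is never corrupted:

        var escaped = keywords.map(function (k) {
            return k.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // escape regex metacharacters
        });
        var pattern = new RegExp('\\b(' + escaped.join('|') + ')\\b', 'g');

        var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT, null, false);
        var nodes = [];
        while (walker.nextNode()) nodes.push(walker.currentNode);

        nodes.forEach(function (node) {
            pattern.lastIndex = 0;                 // reset: global regexes are stateful
            if (!pattern.test(node.nodeValue)) return;
            var span = document.createElement('span');
            span.innerHTML = node.nodeValue.replace(pattern, '<a href="/define/$1">$1</a>');
            node.parentNode.replaceChild(span, node);
        });

    If it still janks, processing the nodes array in chunks from setTimeout keeps the page responsive.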

  • What's the most scalable way to handle somewhat large file uploads in a Python webapp?

    - by Jason Baker
    We have a web application that takes file uploads for some parts. The file uploads aren't terribly big (mostly word documents and such), but they're much larger than your typical web request and they tend to tie up our threaded servers (zope 2 servers running behind an Apache proxy). I'm mostly in the brainstorming phase right now and trying to figure out a general technique to use. Some ideas I have are: Using a python asynchronous server like tornado or diesel or gunicorn. Writing something in twisted to handle it. Just using nginx to handle the actual file uploads. It's surprisingly difficult to find information on which approach I should be taking. I'm sure there are plenty of details that would be needed to make an actual decision, but I'm more worried about figuring out how to make this decision than anything else. Can anyone give me some advice about how to proceed with this?
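    If the nginx idea wins out, a minimal sketch of the shape it usually takes (real directives, invented values): nginx buffers the whole upload to disk before proxying, so a slow client never ties up a backend thread:

        # nginx.conf (sketch)
        location /upload {
            client_max_body_size  50m;                       # raise the default body limit
            client_body_temp_path /var/spool/nginx/uploads;  # spool request bodies to disk
            proxy_pass http://127.0.0.1:8080;                # the Zope backend
        }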

  • Where can I find information about the in-memory layout of the .NET object model?

    - by smwikipedia
    I want to know the in-memory representation of .NET constructs such as "interface", "class", "struct", etc. There's an excellent book on the C++ object model, "Inside the C++ Object Model" by Stanley Lippman; I want a similar book for .NET and C#. Could someone provide some hints about books and articles? I have already read "Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects". Anything more? If this info is not publicly available, a shared-source implementation like Mono or the Shared Source CLI could be an option. Many thanks.

  • Which file is the COM++ object and how do I import it to .NET?

    - by Bad Man
    I'm trying to write a COM++ object wrapper around a Qt widget (control) I wrote, so I can use it in future .NET projects. E.g.:

        public __gc class comWidget;

    In the compile directory are the .exe, an exe.intermediate.manifest, the comWidget.obj, and also some other crap files (.pdb, etc). So what/how do I import into .NET? I feel like I'm missing an important step for registering the object or whatever, but all these tutorials are terribly outdated and ridiculously unhelpful (for instance, I'm using the old CLR syntax because I can't find any good docs on the new stuff).
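    For a classic COM server the usual flow is (a sketch of the standard tooling, not specific to this project): build the component as a DLL, register it, then generate a .NET interop assembly from its type library to reference from the .NET project:

        rem register the COM server (assumes the build produces comWidget.dll with a type library)
        regsvr32 comWidget.dll

        rem generate a managed wrapper assembly to add as a reference
        tlbimp comWidget.dll /out:Interop.ComWidget.dll

    If the build only produces an .exe and an .obj, the project is likely not set up to produce a registerable COM server at all, which would be the missing step.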

  • How to properly handle signals when using the worker thread pattern?

    - by ipartola
    I have a simple server that looks something like this:

        void *run_thread(void *arg) {
            // Communicate via a blocking socket
        }

        int main() {
            // Initialization happens here...

            // Main event loop
            while (1) {
                new_client = accept(socket, ...);
                pthread_create(&thread, NULL, &run_thread, thread_data);
                pthread_detach(thread);
            }

            // Do cleanup stuff:
            close(socket);
            // Wait for existing threads to finish
            exit(0);
        }

    Thus when a SIGINT or SIGTERM is received, I need to break out of the main event loop to get to the cleanup code. Moreover, the master thread is most likely waiting on the accept() call, so it's not able to check some other variable to see if it should break;. Most of the advice I found was along the lines of this: http://devcry.blogspot.com/2009/05/pthreads-and-unix-signals.html (creating a special signal handling thread to catch all the signals and do processing on those). However, it's the processing portion that I can't really wrap my head around: how can I possibly tell the main thread to return from the accept() call and check an external variable to see if it should break;?
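    One pattern that fits (a sketch with error handling omitted; the self-pipe/pselect variant is more portable, since on some platforms close() does not wake a thread blocked in accept()): block the signals everywhere, sigwait() for them in a dedicated thread, and have that thread set a flag and close the listening socket so accept() returns with an error:

        #include <pthread.h>
        #include <signal.h>
        #include <unistd.h>

        static volatile sig_atomic_t shutting_down = 0;
        static int listen_fd;                 /* the listening socket from main() */

        static void *signal_thread(void *arg) {
            sigset_t *set = arg;
            int sig;
            sigwait(set, &sig);               /* sleeps until SIGINT or SIGTERM */
            shutting_down = 1;
            close(listen_fd);                 /* kicks the main thread out of accept() */
            return NULL;
        }

        /* In main(), before the event loop: */
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGINT);
        sigaddset(&set, SIGTERM);
        pthread_sigmask(SIG_BLOCK, &set, NULL);  /* inherited by every new thread */
        pthread_t tid;
        pthread_create(&tid, NULL, signal_thread, &set);

        /* The main loop then becomes: */
        while (!shutting_down) {
            int client = accept(listen_fd, NULL, NULL);
            if (client < 0)
                break;                        /* accept failed: likely the shutdown path */
            /* ... spawn the worker thread ... */
        }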

  • Make user object available to all Controllers in Zend?

    - by Sled
    Hey guys, I'm using Zend_Auth to identify a user in my application. This creates a session with the user object. My question is: how do I make this object available to every controller and action, so I don't have to pull it out of the session every time I need data from it? I'm guessing this should be done in bootstrap.php or index.php, but I don't really know how to make it available to every controller... so any code examples would be appreciated! Thanks!
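    One common ZF1 arrangement (a sketch; everything outside the Zend_* names is invented): pull the identity once in a base controller's init(), and every action controller that extends it gets $this->user for free:

        // My/Controller/Action.php
        abstract class My_Controller_Action extends Zend_Controller_Action
        {
            protected $user = null;

            public function init()
            {
                parent::init();
                $auth = Zend_Auth::getInstance();
                if ($auth->hasIdentity()) {
                    $this->user = $auth->getIdentity(); // the stored user object
                }
            }
        }

        // Any controller in the app:
        class IndexController extends My_Controller_Action
        {
            public function indexAction()
            {
                $this->view->user = $this->user; // hand it to the view as well
            }
        }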

  • Does anybody have any suggestions on which of these two approaches is better for a large delete?

    - by RPS
    Approach #1:

        DECLARE @count int
        SET @count = 2000
        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE TOP (@count) FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND bCopied = 1
              AND FileNameCRC = @localNameCrc

            SELECT @rowcount = @@ROWCOUNT
            WAITFOR DELAY '000:00:00.400'
        END

    Approach #2:

        DECLARE @count int
        SET @count = 2000
        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND FileNameCRC IN (
                  SELECT TOP(@count) FileNameCRC
                  FROM ProductOrderInfo WITH (NOLOCK)
                  WHERE bCopied = 1
                    AND FileNameCRC = @localNameCrc
              )

            SELECT @rowcount = @@ROWCOUNT
            WAITFOR DELAY '000:00:00.400'
        END

  • Setting up SVN (Subversion) to manage our company's files, how to exclude large files from being versioned

    - by Roeland
    Me and two other guys recently started our own web development company. We each work from our homes and have decided we want to keep one central location for all of our files. These files include word documents, spreadsheets, client files, designs, etc. -- anything pertaining to our company. I have a pretty solid internet connection and a Windows 2008 server box sitting at home, so I set up a Subversion repository. Our file repository will look something like this:

        Clients
            Company A
                Design (photoshop files, wireframes, concepts)
                Documents (logins, quotes, proposals, etc)
                Site Backups
            Company B
                Design
                Documents
                Site Backups
        Prospects
            Company C
            Company D
        Our Company
            Our Website
            Documents (contract, operating procedures)

    My question is in regards to design files. The photoshop files that my designer works with range in size from 10mb to 100mb. I don't think we need to keep these files versioned, as this would eat up space incredibly fast. How do I go about controlling which files get versioned and which files are just stored? What I am thinking is that all documents need to be versioned, and any files other than that should not be. Any help would be appreciated, thanks!

    Edit: I am also curious whether this is the way to go. I just like this system since it keeps versions of all my documents. Also, essentially I will have 3 backups in 3 different locations (3 local copies), so no need for backing it up. I am unsure how svn would perform as purely a huge file repository.
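    Subversion can't store a file in the repository without versioning it, but it can be told to leave files unversioned by pattern via the svn:ignore property (a sketch; the *.psd pattern is just an example):

        cd Clients/CompanyA/Design
        svn propset svn:ignore "*.psd" .
        svn commit -m "ignore Photoshop sources in Design"

    Ignored files then exist only in each working copy, so they would need a separate plain backup (rsync, say) rather than living in the repository.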

  • How do I push `sed` matches to the shell call in the replacement pattern?

    - by rassie
    I need to replace several URLs in a text file with some content dependent on the URL itself. Let's say for simplicity it's the first line of the document at the URL. What I'm trying is this:

        sed "s/^URL=\(.*\)/TITLE=$(curl -s \1 | head -n 1)/" file.txt

    This doesn't work, since \1 is not set. However, the shell is getting called. Can I somehow push the sed match variables to that subprocess?
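    It can't work in that direction: the shell expands $(...) once, before sed ever sees a line, which is why \1 is unset inside the command substitution. Two sketches that do work (assuming bash and GNU tools):

        # 1. A plain shell loop: call curl only for URL= lines.
        while IFS= read -r line; do
            case $line in
                URL=*) printf 'TITLE=%s\n' "$(curl -s "${line#URL=}" | head -n 1)" ;;
                *)     printf '%s\n' "$line" ;;
            esac
        done < file.txt

        # 2. GNU sed's 'e' flag runs the replacement as a shell command:
        sed "s/^URL=\(.*\)/curl -s '\1' | head -n 1 | sed 's,^,TITLE=,'/e" file.txt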

  • Why did File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished at 20462 files. Out of curiosity I wrote a tiny recursive function to count the items which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could've caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system. Here is the meat of the script; I don't think including the DBI stuff would be relevant, since I reran the script with nothing but a counter in process_img() and it returned the same number.

        find(\&process_img, $path_from);

        sub process_img {
            eval {
                return if ($_ eq "." or $_ eq "..");
                ## Omitted querying and composing new paths for brevity.
                make_path("$path_to\\img\\$dir_area\\$dir_address\\$type");
                copy($File::Find::name, "$path_to\\img\\$dir_area\\$dir_address\\$type\\$new_name");
            };
            if ($@) { print STDERR "eval barks: $@\n"; return }
        }

    And here is another method I used to count files:

        count_images($path_from);

        sub count_images {
            my $path = shift;
            opendir my $images, $path or die "died opening $path";
            while (my $item = readdir $images) {
                next if $item eq '.' or $item eq '..';
                $img_counter++ && next if -f "$path/$item";
                count_images("$path/$item") if -d "$path/$item";
            }
            closedir $images or die "died closing $path";
        }
        print $img_counter;
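    A guess at the culprit, for what it's worth: if the destination tree sits under $path_from, or files are being moved rather than copied, the directory is changing underneath the traversal, and mutating a directory mid-readdir can make the walker skip entries. A two-phase shape avoids that entirely (a sketch):

        use File::Find;

        # Phase 1: walk the tree and only collect paths.
        my @files;
        find(sub { push @files, $File::Find::name if -f }, $path_from);

        # Phase 2: copy/move after the traversal has finished.
        for my $file (@files) {
            # ... compose destination, make_path, copy as before ...
        }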

  • How do you specify a really large character in UIButton?

    - by Epsilon Prime
    I have a series of buttons that have suit symbols on them. Currently I provide these suit symbols as bitmaps. In preparation for iPhone 4 I'd like to use text instead. However Interface Builder rescales the button to account for whitespace underneath the symbol so I can't get the image to fill the button completely. Any hints on getting Interface Builder to behave?
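    If Interface Builder keeps fighting the glyph's built-in whitespace, the same setup can be done in code (a sketch; titleLabel and titleEdgeInsets are standard UIButton properties, but the inset values are guesses to tune by eye):

        [button setTitle:@"♠" forState:UIControlStateNormal];
        button.titleLabel.font = [UIFont systemFontOfSize:96.0];
        // Pull the title upward to swallow the font's descender space:
        button.titleEdgeInsets = UIEdgeInsetsMake(-8.0, 0.0, 0.0, 0.0);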

  • Why doesn't splicing an object from an array in Javascript return the array?

    - by Allen Gould
    I have an array of objects (say, a deck of cards):

        var deck = [];
        deck.push(new Card(suit, rank));

    The following seems to work (pulling from the "top" or "bottom" of the deck respectively):

        var card = deck.pop();
        var card = deck.shift();

    But if I want a card from the middle (say, if this were a hand of cards):

        var card = deck.splice(2, 1);

    The object doesn't seem to get properly assigned to the variable (everything is undefined). Everything I look up says that splice should return the object that I'm removing - what am I missing?
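    The documented behavior of Array.prototype.splice explains it: splice returns an array of the removed elements, not the element itself, so the Card is sitting at index 0 of the result:

        var card = deck.splice(2, 1)[0];   // the removed Card object

        // or, keeping the intermediate array visible:
        var removed = deck.splice(2, 1);   // e.g. [Card]
        var card = removed[0];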
