Search Results

Search found 29949 results on 1198 pages for 'large scale website'.


  • Denormalization of large text?

    - by tesmar
    If I have large articles that need to be stored in a database, each associated with many tables, would a NoSQL option help? Should I copy the 1000-character articles into multiple "buckets", duplicating them each time they are related to a bucket, or should I use a normalized MySQL database with a generous Memcache layer?
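
    For article-sized text, a normalized schema plus cache-aside reads is usually enough at this scale; below is a rough Python sketch of that pattern, assuming the python-memcached and mysql-connector-python packages and an articles table (all names are illustrative, not from the question). Duplicating the body per bucket only pays off if the buckets are read far more often than the articles change.

        import memcache                  # python-memcached
        import mysql.connector           # mysql-connector-python

        mc = memcache.Client(["127.0.0.1:11211"])
        db = mysql.connector.connect(user="app", password="secret", database="cms")

        def get_article(article_id):
            key = "article:%d" % article_id
            body = mc.get(key)
            if body is None:                         # cache miss: fall back to MySQL
                cur = db.cursor()
                cur.execute("SELECT body FROM articles WHERE id = %s", (article_id,))
                row = cur.fetchone()
                cur.close()
                body = row[0] if row else None
                if body is not None:
                    mc.set(key, body, time=3600)     # keep it hot for an hour
            return body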

    Read the article

  • Python large variable RAM usage

    - by PPTim
    Hi. Say there is a dict variable that grows very large during runtime, up into millions of key:value pairs. Does this variable get stored in RAM, effectively using up all the available memory and slowing down the rest of the system? Asking the interpreter to display the entire dict is a bad idea, but would it be okay as long as one key is accessed at a time? Tim
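
    Yes, an ordinary dict lives entirely in the process's RAM; reading one key at a time is cheap, and it is building a repr of the whole thing that hurts. A small Python 3 sketch of the difference, with the stdlib shelve module as a disk-backed fallback if the data outgrows memory (the file name is just an example):

        import shelve
        import sys

        cache = {i: "value %d" % i for i in range(1000000)}
        print(sys.getsizeof(cache))        # size of the dict structure itself, not its contents

        print(cache[12345])                # single-key lookup: O(1), no full traversal
        # print(cache)                     # builds a giant string of every pair -- avoid

        # A disk-backed mapping keeps a dict-like interface without holding everything in RAM.
        with shelve.open("cache.db") as disk_cache:
            disk_cache["12345"] = cache[12345]   # shelve keys must be strings
            print(disk_cache["12345"])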

    Read the article

  • Visualizing the sitemap of a website with a large number of pages

    - by Michael
    I'm looking for a tool or service that can spider a web domain with a large number of pages, create a sitemap, and then visualize that map in a way that will help me see, understand, and group the content (I'm new to the site). Something like a tree view or other standard sitemap visualization would be great. I have not yet been able to find a tool that does this (I've found plenty of things that spider the site and create an XML file, but nothing to visualize it). Thanks!
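
    I have not found a ready-made visualizer either, but if the spider stage already produces a standard sitemap.xml, a few lines of Python can at least turn it into an indented tree for a first look. The URL below is a placeholder and the namespace is the standard sitemaps.org one; treat this as a starting point rather than a finished tool.

        import urllib.request
        import xml.etree.ElementTree as ET
        from urllib.parse import urlparse

        SITEMAP_URL = "https://example.com/sitemap.xml"
        NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

        tree = ET.parse(urllib.request.urlopen(SITEMAP_URL))
        root = {}
        for loc in tree.findall(".//sm:loc", NS):
            parts = [p for p in urlparse(loc.text.strip()).path.split("/") if p]
            node = root
            for part in parts:
                node = node.setdefault(part, {})   # nested dict keyed by path segment

        def show(node, depth=0):
            for name, children in sorted(node.items()):
                print("  " * depth + name)
                show(children, depth + 1)

        show(root)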

    Read the article

  • PHP - Large Integer mod calculation

    - by Kami
    I need to calculate a modulus with a large number, like:

        <?php
        $largenum = 95635000009453274121700;
        echo $largenum % 97;
        ?>

    It's not working because $largenum is too big for an int in PHP. Any idea how to do this?
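
    The usual fix is arbitrary-precision arithmetic. A minimal sketch using the BCMath extension (assuming it is enabled on your PHP build); the number has to be written as a string, because the literal above already overflows PHP's native integer:

        <?php
        $largenum = '95635000009453274121700';    // must stay a string
        echo bcmod($largenum, '97');               // arbitrary-precision modulus
        // If the GMP extension is available instead:
        // echo gmp_strval(gmp_mod($largenum, 97));
        ?>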

    Read the article

  • Forcing deallocation of large cache object in Java

    - by Jack
    I use a HashMap with millions of entries to cache values needed by an algorithm; the key is a combination of two objects encoded as a long. Since it grows continuously (the keys in the map change, so old entries are no longer needed), it would be nice to be able to wipe all the data it contains and start again during execution. Is there a way to do this effectively in Java? I mean release the associated memory (about 1-1.5 GB of HashMap) and restart from an empty HashMap.
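
    You cannot force the JVM to hand that memory back on demand (System.gc() is only a hint), but you can make the old entries collectable. A minimal Java sketch, with Object standing in for whatever value type the algorithm actually caches:

        import java.util.HashMap;
        import java.util.Map;

        class CacheReset {
            // Object stands in for the real cached value type.
            private Map<Long, Object> cache = new HashMap<Long, Object>();

            void resetKeepingCapacity() {
                // Removes every entry but keeps the map's (large) internal bucket array.
                cache.clear();
            }

            void resetReleasingCapacity() {
                // Drop the whole map; the old table and entries become garbage as soon
                // as nothing else references them, and work restarts from a small map.
                cache = new HashMap<Long, Object>();
            }
        }

    Either way the space is returned to the JVM heap for reuse rather than to the operating system; if old keys expire continuously, a bounded or weak-reference cache may be a better long-term fit than periodic wipes.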

    Read the article

  • ANTLRWorks "code too large" problem

    - by BB
    In ANTLRWorks I get this error:

        [18:21:03] Checking Grammar Grammar.g...
        [18:21:26] Grammar.java:12: code too large
        [18:21:26] public static final String[] tokenNames = new String[] {
        [18:21:26]     ^
        [18:21:26] 1 error

    Using the generated code in a Java project instead, it compiles normally. What could be causing this problem? Thanks.

    Read the article

  • Edit very large xml files

    - by Matt
    I would like to create a text box which loads XML files and lets users edit them. However, I cannot use XmlDocument to load them, since the files can be very large. I am looking for options to stream/load the XML document in chunks so that I do not get out-of-memory errors; at the same time, performance is important too. Could you let me know what would be good options?
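
    One common option in .NET is to stream the document with XmlReader instead of loading it all with XmlDocument; a rough C# sketch (the file name is a placeholder) that only ever holds a small window of the document in memory:

        using System;
        using System.Xml;

        class StreamLargeXml
        {
            static void Main()
            {
                // XmlReader walks the file node by node instead of building a full DOM.
                using (XmlReader reader = XmlReader.Create("huge.xml"))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element)
                        {
                            Console.WriteLine(reader.Name);   // hand each piece to the UI in chunks
                        }
                    }
                }
            }
        }

    Writing edits back out can follow the same shape with XmlWriter, copying nodes through and substituting the changed ones, so the whole tree never has to exist in memory at once.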

    Read the article

  • Easiest way to split up a large controller file

    - by timpone
    I have a Rails controller file that is too large (~900 lines, api_controller). I'd like to split it up into something like this:

        api_controller.rb
        api_controller_item_admin.rb
        api_controller_web.rb

    I don't want to split it into multiple controllers. What would be the preferred way to do this? Could I just require the new parts at the end, like:

        require './api_controller_item_admin'
        require './api_controller_web'
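
    One way that keeps a single controller is to move groups of actions into plain Ruby modules and include them, rather than require-ing partial class bodies. A sketch under assumed file and module names, assuming lib/ is on the load path (Rails concerns are another spelling of the same idea):

        # lib/api_controller_item_admin.rb
        module ApiControllerItemAdmin
          def admin_list_items
            # ... the admin actions, moved here unchanged ...
          end
        end

        # lib/api_controller_web.rb
        module ApiControllerWeb
          def web_index
            # ...
          end
        end

        # app/controllers/api_controller.rb
        require 'api_controller_item_admin'
        require 'api_controller_web'

        class ApiController < ApplicationController
          include ApiControllerItemAdmin
          include ApiControllerWeb
        end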

    Read the article

  • Python: HTTP Post a large file with streaming

    - by Daniel Von Fange
    I'm uploading potentially large files to a web server. Currently I'm doing this:

        import urllib2
        f = open('somelargefile.zip','rb')
        request = urllib2.Request(url,f.read())
        request.add_header("Content-Type", "application/zip")
        response = urllib2.urlopen(request)

    However, this reads the entire file's contents into memory before posting it. How can I have it stream the file to the server?
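
    One way to avoid the f.read() is to hand the file object itself to a client that supports streaming uploads; a sketch using the third-party requests package (the URL is a placeholder):

        import requests   # third-party: pip install requests

        url = "https://example.com/upload"
        with open("somelargefile.zip", "rb") as f:
            # Passing the file object (not f.read()) makes requests stream the body
            # from disk in chunks instead of holding the whole file in memory.
            response = requests.post(
                url,
                data=f,
                headers={"Content-Type": "application/zip"},
            )
        print(response.status_code)

    Staying with plain urllib2, a common workaround is to mmap the file and pass the mapping object as the request data, since it supports both len() (for Content-Length) and read() without copying the whole file into memory.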

    Read the article

  • Edit very large xml files in c#

    - by Matt
    Hi, I would like to create a text box which loads XML files and lets users edit them. However, I cannot use XmlDocument to load them, since the files can be very large. I am looking for options to stream/load the XML document in chunks so that I do not get out-of-memory errors; at the same time, performance is important too. Could you let me know what would be good options? Thanks in advance for your help! Matt

    Read the article

  • Use .js files for caching large dropdown lists.

    - by ProfK
    I would like to keep the contents of large UI lists cached on the client, updated either according to certain criteria or at regular intervals. Client-side code can then fill the dropdowns locally, avoiding long page download times. How can I go about this? I mean, what patterns and strategies would be suitable for this?
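
    A minimal sketch of the pattern: generate the list as a static .js file that the browser caches (far-future Expires/Cache-Control headers, and a new file name or query string whenever the data changes), then fill the select element on the client. The file name, variable, and element id below are made up for the example:

        // countries.js -- regenerated on the server when the list changes and served
        // with long-lived cache headers so the browser keeps it between page loads.
        var countryList = ["Argentina", "Australia", "Austria" /* ... */];

        // Page script: build the <option> tags on the client instead of sending
        // hundreds of them down with every page.
        window.addEventListener("load", function () {
          var select = document.getElementById("country"); // <select id="country">
          for (var i = 0; i < countryList.length; i++) {
            var opt = document.createElement("option");
            opt.value = countryList[i];
            opt.text = countryList[i];
            select.appendChild(opt);
          }
        });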

    Read the article

  • Increasing load capacity for growing website

    - by markxi
    My website currently runs on a dedicated web server (with LiteSpeed) and a dedicated MySQL database server. It's a download-based site with a lot of user-generated content that can be streamed and downloaded; there are also thousands of thumbnails and static files. I'm at the stage where the web server can no longer handle the amount of traffic, so I'm looking at how best to increase capacity, considering the large amount of downloadable content. My host suggests mirroring everything on a second web server and distributing the load between them using either DNS Made Easy or my own load balancer (using ldirector) in front of the two web servers. Could anyone advise whether the above method would be the best option? Does anyone have any experience with DNS Made Easy and/or ldirector? I'd appreciate any help.

    Read the article

  • Displaying very large images on the iPad

    - by Brodie4598
    I need to display some very large images on the iPad. The files are JPGs of about 6700x2700 pixels (maps). Is there any way around loading the entire image into memory? Currently I load each one into a scroll view for zooming/panning. The images are stored locally on the device.

    Read the article

  • Use of bit-torrent for large file download

    - by questzen
    The company I work for procures large volumes of data and does this by subscribing to FTP feeds. I was wondering if it is possible to download the same data using a BitTorrent tracker; the major challenge, in my opinion, is authenticating the users. Most FTP servers we subscribe to have a restriction on the number of FTP connection attempts. Does anyone here have any experience with this? Any advice is welcome.

    Read the article

  • Why am I experiencing random connection timeouts? (CentOS)

    - by Ryan
    I have a CentOS server that currently hosts several websites (all related to each other in some form or another). Recently, at the most random times throughout the day, the site speed slows to a crawl and eventually hits a connection timeout. This typically happens anywhere between 10am and 1pm; this morning, however, it happened at 8am. I am not very familiar with server administration, so I am not sure what I should be looking for in this situation. What are some possible causes of the server slowing the websites down to a crawl or timing out? Are there specific things I should check when this happens? Using

        tail /var/log/httpd/access_log

    I have noticed that when this downtime occurs there are usually a lot of requests from BingBot, Googlebot, and sometimes various other bots or spiders that I am unfamiliar with. Could this be related, and if so, how can I keep it from making my websites lag out? Thanks in advance for any help or advice. The websites that are timing out are built with PHP and use a MySQL database to display information.
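
    To check whether crawlers line up with the slowdowns, summarising the access log is a quick first step; this assumes Apache's standard combined log format and the log path from the question:

        # busiest client IPs
        awk '{print $1}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head -20

        # busiest user agents (the last quoted field in the combined format)
        awk -F'"' '{print $6}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head -20

    If bots dominate during the bad periods, a Crawl-delay line in robots.txt slows Bing and most minor crawlers (Googlebot ignores it, but its rate can be lowered in Webmaster Tools); MySQL's slow query log is the other usual suspect worth checking at the same time.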

    Read the article

  • Android Force Recycle Large Bitmap?

    - by GuyNoir
    From another Stack Overflow question, it seems that Android handles large bitmaps differently from other memory. It also seems like there is a way to force Android to recycle bitmaps to free up memory. Can anyone enlighten me on how to do this? My application uses 2-6 huge bitmaps at all times, so it nearly exhausts the phone's memory when running, and I want to clear them up when the user quits.
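
    Bitmap.recycle() is the explicit hook for this: it releases the native pixel memory immediately instead of waiting for garbage collection. A rough Java sketch, assuming you keep your own list of the large bitmaps (the class and field names are illustrative):

        import android.app.Activity;
        import android.graphics.Bitmap;

        import java.util.ArrayList;
        import java.util.List;

        public class MapViewerActivity extends Activity {
            // However you already track the 2-6 large bitmaps; this list is illustrative.
            private final List<Bitmap> loadedBitmaps = new ArrayList<Bitmap>();

            @Override
            protected void onDestroy() {
                super.onDestroy();
                for (Bitmap bitmap : loadedBitmaps) {
                    if (bitmap != null && !bitmap.isRecycled()) {
                        bitmap.recycle();   // frees the native pixel buffer immediately
                    }
                }
                loadedBitmaps.clear();      // drop the references so the GC can finish the job
            }
        }

    The one rule to respect is that a recycled bitmap must never be drawn again, so only do this once the views using it are gone.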

    Read the article

  • SVN: Checking out a large project over slow connection

    - by far
    Hello, I am new to SVN. I want to check out a very large project over a slow connection, which takes ages. I have identical zipped versions of the project on both the remote server and my local machine. Is there an easy and quick way to sync my local project with the remote server without a full checkout? Thanks
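
    A plain zip of the sources is not enough on its own, because a working copy also needs its .svn metadata. One trick is to create the working copy where the link to the repository is fast, move it as a zip, and only run svn update over the slow connection; a sketch with a placeholder repository URL:

        # On a machine with a fast link to the repository (e.g. the server itself):
        svn checkout http://svn.example.com/repo/trunk project-wc
        zip -r project-wc.zip project-wc      # the .svn metadata comes along

        # Move project-wc.zip the same way you already move the zipped sources, then:
        unzip project-wc.zip
        cd project-wc
        svn update                            # fetches only what changed since the zip was made

    Sparse checkouts are another option if you only need parts of the tree: svn checkout --depth empty, then svn update --set-depth on the directories you actually want.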

    Read the article

  • large databases in sqlite - file size considerations?

    - by Gj
    I'm using an SQLite db, which is very convenient and seems to meet all of my needs at this point. Currently my db size is <50MB, but I now need to add a new table which will store large text blobs, which will cause the db to grow to as much as 5GB within the next year. Would SQLite be able to deal with a 5GB db size? Any caveats to that, compared with, say, MySQL?

    Read the article

  • Downloading large file with php

    - by Alessandro
    Hi, I have to write a PHP script to download potentially large files. The script I'm reporting here works fine most of the time. However, if the client's connection is slow, the request ends (with status code 200) in the middle of the download, not always at the same point and not after the same amount of time. I tried to override some php.ini settings (see the first statements) but the problem remains. I don't know if it's relevant, but my hosting provider is SiteGround, and for simple static file requests the download works fine even over slow connections. I've found "Forced downloading large file with php" but I didn't understand mario's answer. I'm new to web programming. So here's my code:

        <?php
        ini_set('memory_limit','16M');
        ini_set('post_max_size', '30M');
        set_time_limit(0);
        include ('../private/database_connection.php');

        $downloadFolder = '../download/';
        $fileName = $_POST['file'];
        $filePath = $downloadFolder . $fileName;
        if($fileName == NULL) {
            exit;
        }

        ob_start();
        session_start();
        if(!isset($_SESSION['Username'])) {
            // or redirect to login (remembering this download request)
            $_SESSION['previousPage'] = 'download.php?file=' . $fileName;
            header("Location: login.php");
            exit;
        }

        if (file_exists($filePath)) {
            header('Content-Description: File Transfer');
            header('Content-Type: application/octet-stream');
            //header('Content-Disposition: attachment; filename='.$fileName);
            header("Content-Disposition: attachment; filename=\"$fileName\"");
            header('Content-Transfer-Encoding: binary');
            header('Expires: 0');
            header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
            //header('Pragma: public');
            header('Content-Length: ' . filesize($filePath));
            ob_clean();
            flush();

            // download
            // 1
            // readfile($filePath);
            // 2
            $file = @fopen($filePath,"rb");
            if ($file) {
                while(!feof($file)) {
                    print(fread($file, 1024*8));
                    flush();
                    if (connection_status()!=0) {
                        @fclose($file);
                        die();
                    }
                }
                @fclose($file);
            }
            exit;
        }
        else {
            header('HTTP/1.1 404 File not found');
            exit;
        }
        ?>

    Read the article

  • Hadoop: Processing large serialized objects

    - by restrictedinfinity
    I am working on an application to process (and merge) several large Java serialized objects (on the order of GBs in size) using the Hadoop framework. Hadoop distributes the blocks of a file across different hosts. But since deserialization requires all of the blocks to be present on a single host, this is going to hurt performance drastically. How can I deal with this situation, where the different blocks cannot be processed individually, unlike text files?

    Read the article
