Search Results

Search found 28880 results on 1156 pages for 'check disk'.

  • What exactly is .NET Framework 3.5 Service Pack 1?

    - by Richard77
    Hello, I ran a recovery-disk operation on my laptop because it had become unstable. Now I'm about to re-install SQL Server 2008 Professional, but the installer keeps telling me that I need to install .NET Framework 3.5 with Service Pack 1. What's strange is that I was asked for the same thing when I installed Visual Studio 2008 Professional earlier. I'd like to know: what is .NET Framework 3.5 Service Pack 1? Is it one piece of software sitting on top of Windows, or several pieces of software sharing the same name, so that Visual Studio has one, SQL Server has one, and so on? And why the name "Service Pack 1" after ".NET Framework"? I'm really lost. Thanks for helping.

  • Large Video Uploads via a website

    - by Andrew
    Some of the problems that can happen with large uploads are timeouts, disconnections, and the inability to resume a file, forcing the upload to start over from the beginning. Assuming these files can be up to around 5 GB in size, what is the best solution for dealing with this? I'm using a Drupal 6 install for the website. Some constraints from the server setup I have to deal with:

      - Shared hosting, with a maximum of 200 connections at a time (unlimited disk space)
      - No ability to create users through an API (so I can't automatically generate FTP accounts)
      - I do have the ability to run cron-type scripts via a Drupal module.

    My initial thought was to create FTP users based on Drupal accounts and require uploaders to download an FTP client for their OS of choice, but the lack of an API to auto-create FTP accounts, and the inability to do it from the command line, rather hinder that solution. If there's a workaround someone can think of, let me know! Thanks

  • Rails ActiveRecord - How to set association save order

    - by Altonymous
    I have a weird relationship that needs to be maintained for legacy processes, and I'm trying to figure out how to keep it populated given the new model associations.

    New relationship setup:

        Machine
          has_many MachineReadings
          has_many Disks
            has_many DiskReadings

    Old relationship setup:

        Machine
          has_many MachineReadings
            has_many DiskReadings
              has_many Disks

    The problem is that data will come in on the Machine model as nested attributes using the new relationship setup, and I need to set machine_reading_id on the DiskReading model so the old association can continue to be used. I tried doing this via an after_save hook that traverses back up to the machine and then down to its readings to get the machine_reading id, so I could populate the DiskReading model. However, the associations aren't being saved in the order I would expect: the Disks and DiskReadings are saved before the MachineReadings, so when I go after machine_reading.id it hasn't been written yet and I am unable to get at it. For example:

        # machine_disk_reading.rb
        after_save :build_old_relationship

        def build_old_relationship
          self.machine_reading_id = disk.machine.readings.find_by_date_time(date_time).id
        end
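
    One hedged workaround (a sketch only: the model and column names are inferred from the question, and it assumes the Rails 2/3-era dynamic finders the snippet already uses) is to declare the legacy link-up as an after_save on Machine, after the associations, so it runs once the autosaved nested records have all been written:

        # machine.rb -- hypothetical sketch, not the asker's actual models
        class Machine < ActiveRecord::Base
          has_many :readings, :class_name => 'MachineReading'
          has_many :disks
          accepts_nested_attributes_for :readings, :disks

          # Declared after the associations, so it fires after their autosave callbacks.
          after_save :link_legacy_readings

          private

          def link_legacy_readings
            disks.each do |disk|
              disk.disk_readings.each do |disk_reading|
                reading = readings.find_by_date_time(disk_reading.date_time)
                disk_reading.update_attribute(:machine_reading_id, reading.id) if reading
              end
            end
          end
        end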

  • How can I persist a large Perl object for re-use between runs?

    - by Alnitak
    I've got a large XML file which takes over 40 seconds to parse with XML::Simple. I'd like to be able to cache the resulting parsed object so that on the next run I can just retrieve the parsed object and not reparse the whole file. I've looked at using Data::Dumper, but the documentation is a bit lacking on how to store and retrieve its output from disk. Other classes I've looked at (e.g. Cache::Cache) appear designed for storing many small objects, not a single large one. Can anyone recommend a module designed for this?

    EDIT: The XML file is ftp://ftp.rfc-editor.org/in-notes/rfc-index.xml. On my Mac Pro, benchmark figures for reading the entire file with XML::Simple vs. Storable are:

                   s/iter  test1  test2
        test1        47.8     --  -100%
        test2       0.148 32185%     --
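
    For what it's worth, a minimal sketch of the Storable approach the benchmark suggests (the cache filename and the mtime-based staleness check are illustrative, not prescriptive):

        use strict;
        use warnings;
        use Storable qw(store retrieve);
        use XML::Simple qw(XMLin);

        my $xml   = 'rfc-index.xml';
        my $cache = "$xml.stored";

        my $data;
        if ( -e $cache && ( stat $cache )[9] >= ( stat $xml )[9] ) {
            $data = retrieve($cache);    # fast path: thaw the cached structure
        }
        else {
            $data = XMLin($xml);         # slow path: the full 40-second parse
            store( $data, $cache );      # freeze it for the next run
        }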

  • Color Themes for Eclipse?

    - by John Stauffer
    I am a recovering Emacs user who is trying to ease into Eclipse. (Since I'm encouraging the rest of the team to use it, I suppose I should at least try to get along with it.) My current excuse is that it hurts my eyes: I'm using the excellent Zenburn theme in Emacs and would love to find it for Eclipse. However, since changing my color theme every few months makes for a great way to procrastinate, ideally I'd like to find a repository of Eclipse color themes. There don't appear to be any Eclipse themes indexed by Google, so all the great themes must be sitting on your hard disks somewhere. Please share them. Thanks

  • Return an object after parsing XML with SAX

    - by sentimental_turtle
    I have some large XML files to parse, and I've created an object class to contain my relevant data. Unfortunately, I am unsure how to return that object for later processing. Right now I pickle my data and, moments later, unpickle the object to access it. This seems wasteful, and there must surely be a way of grabbing my data without hitting the disk.

        def endElement(self, name):
            if name == "info":  # done collecting this iteration
                self.data.setX(self.x)
                self.data.setY(self.y)
            elif name == "lastTagOfInterest":  # done with file
                # want to return my object from here
                filehandler = open(self.outputname + ".pi", "w")
                pickle.dump(self.data, filehandler)
                filehandler.close()

    I have tried putting a return statement in my endElement handler, but the value does not seem to get passed up the chain to where I call the SAX parser. Thanks for any tips.
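
    A sketch of the in-memory alternative (the tag name comes from the question; the Data class is a stand-in for the asker's object class): SAX handlers can't return values through the parser, but the handler instance outlives the parse, so the caller can simply read the finished object off it afterwards:

        import xml.sax

        class Data(object):
            """Stand-in for the asker's object class."""
            def __init__(self):
                self.x = None

        class InfoHandler(xml.sax.ContentHandler):
            def __init__(self):
                xml.sax.ContentHandler.__init__(self)
                self.data = Data()   # lives on the handler, not on disk

            def endElement(self, name):
                if name == "info":
                    self.data.x = "collected value"  # whatever was gathered

        handler = InfoHandler()
        xml.sax.parse("input.xml", handler)
        result = handler.data    # grab the object here -- no pickle round-trip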

  • SharpZipLib: can you add a file without it copying the entire zip first?

    - by schmoopy
    I'm trying to add an existing file to a .zip file using SharpZipLib. The problem is that the zip file is 1 GB in size, and when I try to add one small file (400 KB), SharpZipLib creates a copy/temp of the original zip before adding the new file. That poses a problem when the amount of free disk space is less than twice the size of the zip file you are trying to update. For example:

        1 GB  myfile.zip
        1 GB  myfile.zip.tmp.293

        ZipFile zf = new ZipFile(path);
        zf.BeginUpdate();
        zf.Add(file);   // adding a 400 KB file here causes a 1 GB temp file to be created
        zf.EndUpdate();
        zf.Close();

    Is there a more efficient way to do this? Thanks :-)
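
    If the machine has RAM to spare, one hedged option: ZipFile.BeginUpdate has an overload that takes an IArchiveStorage, and SharpZipLib ships a MemoryArchiveStorage that keeps the rebuild buffer in memory instead of in a .tmp file. The archive is still rewritten, just not on disk; a sketch, assuming a reasonably recent SharpZipLib build:

        using ICSharpCode.SharpZipLib.Zip;

        ZipFile zf = new ZipFile(path);
        zf.BeginUpdate(new MemoryArchiveStorage());  // temp copy lives in RAM, not on disk
        zf.Add(file);
        zf.EndUpdate();
        zf.Close();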

  • iPhone App takes up too much memory

    - by Stephen Furlani
    OK, so here's my problem: my iPhone app is 1.2 MB on disk (granted, I have a bunch of images for the GUI buttons, backgrounds, etc.), but in memory it takes up a whopping 15 MB! That means that if I then take a picture with the camera (8 MB by default), I get several memory warnings even before the picker calls its delegate. How can I tell what is grabbing so much memory, and how can I free it? I've removed all of my debugging symbols and compile with -Os, but the app still takes up a huge amount of memory. Also, (how) can I change the default resolution of the camera?

  • Guide on crawling the entire web?

    - by bohohasdhfasdf
    I just had this thought and was wondering if it's possible to crawl the entire web (just like the big boys do!) on a single dedicated server (say, a Core2Duo with 8 GB of RAM, a 750 GB disk, and a 100 Mbps connection). I've come across a paper where this was done, but I cannot recall its title; it was about crawling the entire web on a single dedicated server using some statistical model. Anyway, imagine starting with just around 10,000 seed URLs and doing an exhaustive crawl: is it possible? I need to crawl the web but am limited to a dedicated server. How can I do this? Is there an open-source solution out there already? For example, see this real-time search engine: http://crawlrapidshare.com. The results are extremely good and freshly updated. How are they doing this?

  • Visual C++ 9 linker file size limitation

    - by Raindog
    It appears that the Visual C++ 9 linker uses a file-allocation algorithm that doubles the size of its output file on every allocation, so you get 512 MB, 1024 MB, 2048 MB, 4096 MB. The problem is that it relies on a library that cannot handle files larger than 2048 MB, and so it crashes with an error along the lines of "cannot read file ... is the disk full or write protected". Is there a way to bypass this limitation, or to replace the linker with something else that works? A bit of background: I have a code generator that produces a large number of files, around 15k .cpp files. I've managed to reduce that to about 6k files, which at least completes the linking process, but I would like to be able to include all 15k without having to create multiple libs.

  • Streaming data to the browser as a file of unknown size

    - by Sir Psycho
    I have some data queried from the database, and I'd like to send it to the client as a CSV file. The file size varies each time, since the DB data returned can be of any size. Instead of saving this file to the hard disk, I'd like to send it to the browser while it is still being processed into CSV by my algorithm. Response.Write seems useless: for some reason the file-download dialog is only displayed once my processing is finished. This seems odd, as I'm writing all my output to the Response.Output stream. I have downloaded files on the web before where the file size was not known and the browser just kept on downloading. Is there any way to achieve this? The following Stack Overflow thread did not offer any good advice: http://stackoverflow.com/questions/873995/asp-net-downloading-large-files-of-unknown-size Thanks
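
    A sketch of the usual fix (GetRows and FormatCsv are hypothetical placeholders for the existing query and formatting code): turn off ASP.NET's output buffering and flush after each chunk, so the browser shows its download dialog while rows are still being produced. With no Content-Length set, IIS falls back to chunked transfer encoding, which is exactly the "unknown size, keep downloading" behaviour described.

        // Inside the page or handler that produces the CSV:
        Response.BufferOutput = false;                  // don't hold the whole response in memory
        Response.ContentType = "text/csv";
        Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");

        foreach (DataRow row in GetRows())              // GetRows(): the existing DB query
        {
            Response.Output.WriteLine(FormatCsv(row));  // FormatCsv(): hypothetical row formatter
            Response.Flush();                           // push this chunk to the client now
        }
        Response.End();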

  • Using nsIZipWriter (or another means) to compress a string as a string?

    - by Daniel
    I need to be able to take a JavaScript string, compress it using any fast and available means, and get back a binary string/blob. Background: the extension I'm developing needs to send various large pieces of content to my server. It does this conveniently by dynamically creating a form, adding fields to it, and posting it. Some of these fields are just too big, bandwidth-wise, for repeated use. I'd like to be able to compress them before adding them, and then maybe base64 them if the raw characters cause problems in the message. Any ideas? I could use nsIZipWriter with temporary files on disk, but that is quite ugly and probably sluggish.

  • Fast on-demand C++ compilation [closed]

    - by Amit Prakash
    I'm looking at the possibility of building a system where, when a query hits the server, we turn the query into C++ code, compile it as a shared object, and then run that code. The compilation itself needs to be fast for this to be worthwhile. My code can generate the corresponding C++, but if I have to write it out to disk, invoke gcc to get a .so file, and then run it, it hardly seems worth it. Are there ways to get a small snippet of code compiled and ready as a shared object quickly? (A significant one-time start-up cost before the queries arrive is fine.) If such a tool has a permissive license, that's a further plus.

    Edit: I have a very restrictive query language that the users can use, so the usual security threat is not relevant; my own code translates the query into C++. The answer mentioning clang is perfect.
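
    Even the plain write-compile-dlopen loop can be quick when the generated translation unit is small and optimization is kept low; a minimal sketch (the paths and the exported function name are illustrative):

        #include <cstdio>
        #include <cstdlib>
        #include <dlfcn.h>

        typedef int (*query_fn)();

        int main() {
            // 1. Emit the generated code (normally produced by the query translator).
            FILE* f = std::fopen("/tmp/query.cpp", "w");
            std::fprintf(f, "extern \"C\" int run_query() { return 42; }\n");
            std::fclose(f);

            // 2. Compile to a shared object; -O0 keeps the compile fast.
            if (std::system("clang++ -O0 -shared -fPIC -o /tmp/query.so /tmp/query.cpp") != 0)
                return 1;

            // 3. Load and run it in-process, no server restart needed.
            void* handle = dlopen("/tmp/query.so", RTLD_NOW);
            query_fn run = (query_fn)dlsym(handle, "run_query");
            std::printf("query returned %d\n", run());
            dlclose(handle);
            return 0;
        }

    Beyond that, clang's libraries can compile entirely in memory (JIT-style), which avoids the disk round trip the question worries about.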

  • Messages stuck permanently in session

    - by Tim Whitlock
    I am getting Drupal messages stuck permanently in the session, so that after being displayed they are not cleared. The unsetting code in drupal_get_messages() in bootstrap.inc is firing; it's as if the session is sleeping (i.e. serializing to disk) before the messages array is cleared. Has anyone witnessed such a thing?

    UPDATE: The call that commits the session starts from drupal_page_footer() at the bottom of index.php, and for some reason it is executing twice per request: once with the emptied messages, and then again with the messages back in the array.

  • Organizing PHP includes in your development environment

    - by Andrew Heath
    I'm auditing my site design based on the excellent Essential PHP Security by Chris Shiflett. One of the recommendations I'd like to adopt is moving all possible files out of the webroot, and that includes includes. Doing so on my shared host is simple enough, but I'm wondering how people handle this on their development testbeds. Currently I have an XAMPP installation configured so that localhost/mysite/ maps to D:\mysite\, with includes stored in D:\mysite\includes\. To keep include paths accurate, I'm guessing I need to replicate the server's layout on my local disk, something like D:\mysite\public_html\. Is there a better way?
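
    One common trick (a sketch; the directory names are taken from the question, and db.php is an illustrative include): mirror the sibling layout on both machines and derive the include root from the script's own location, so neither machine's absolute paths get hard-coded:

        <?php
        // config.php, assumed to live in the webroot (D:\mysite\public_html\ locally);
        // includes/ is then a sibling of the webroot on both machines.
        define('INCLUDE_ROOT', dirname(dirname(__FILE__)) . '/includes');

        require INCLUDE_ROOT . '/db.php';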

  • Generate a PDF thumbnail (open source/free)

    - by AndrewB
    Looking at other posts, I could not find an adequate solution for my needs: I'm trying to get just the first page of a PDF document as a thumbnail. This will run as a server application, so I would not want to write the PDF out to a file and then call a third application that reads it back in to generate the image on disk. In pseudocode, I want something like:

        doc = new PDFdocument("some.pdf");
        page = doc.page(1);
        Image image = page.image;

    Thanks.
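
    For what it's worth, a sketch of how this looks with Apache PDFBox (the 2.x rendering API), which keeps everything in memory; the 36 DPI value is just an illustrative thumbnail resolution:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;
        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.rendering.PDFRenderer;

        public class PdfThumb {
            public static void main(String[] args) throws Exception {
                try (PDDocument doc = PDDocument.load(new File("some.pdf"))) {
                    PDFRenderer renderer = new PDFRenderer(doc);
                    // Page index 0 = first page; low DPI yields a thumbnail-sized image.
                    BufferedImage image = renderer.renderImageWithDPI(0, 36);
                    ImageIO.write(image, "jpg", new File("thumb.jpg")); // or pass the BufferedImage on in memory
                }
            }
        }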

  • Cache images provided through script

    - by Wim Haanstra
    I have a script that serves an image based on several querystring variables. I am also using URL rewriting within IIS 7.5, so images have URLs like this:

        http://mydomain/pictures/ajfhajkfhal/44/thumb.jpg
        http://mydomain/pictures/ajfhajkfhal/44.jpg

    These are rewritten to:

        http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44&thumb=thumb.jpg
        http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44

    I added caching rules to IIS to cache JPG images when they are requested, and this works for images that are real files on disk. Images served through the script, however, are always requested through the script again, without being cached. The images do not change often, so keeping them cached for at least 30 minutes (or until the file changes) would be best. I am using .NET/C# 4.0 for the website. I have tried setting several cache options in C#, but I can't seem to find how to get these images cached client-side, while my static images are cached properly.
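
    A sketch of the server-side caching headers that usually make rewritten image URLs cacheable client-side (lastChanged and imageBytes are placeholders for values the script already computes):

        // Inside Picture.aspx's handler:
        Response.ContentType = "image/jpeg";
        Response.Cache.SetCacheability(HttpCacheability.Public);    // allow browser + proxy caching
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(30));  // the 30-minute window
        Response.Cache.SetLastModified(lastChanged);                // the image's last change time
        Response.Cache.SetValidUntilExpires(true);
        Response.BinaryWrite(imageBytes);                           // the bytes the script produced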

  • Need concatenating VBA code to work around a memory issue

    - by doharr
    My setup: I have 50,000 rows of data (my row count will increase in the future, so we might as well say I have a full worksheet of 64,000+ rows). All of the data is text, with no formulas, etc. Column A is open; columns B through AC contain the data that needs to be concatenated. Once concatenated into column A, each row will contain about 60,000 characters (roughly 6 KB); after additional manipulation, each cell will become a file. I have tried concatenating in Excel, but I run into memory issues when I select the concatenating function and fill it down the worksheet: it crashes at around row 8,200. My system is Excel 2003 on Windows XP Professional with 2 GB of RAM, and I have 4 GB of disk space. I'm hoping to find VBA code that will conserve memory and not crash the way the worksheet formula does. Thank you
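
    Worth noting: an Excel 2003 cell tops out at 32,767 characters, so a 60,000-character result can't live in column A anyway. Since the end goal is one file per row, a sketch that concatenates into a VBA string (which has no such limit) and writes each row straight to its own file, bypassing both the fill-down memory blow-up and the cell limit (the output folder is illustrative):

        Sub ConcatRowsToFiles()
            Dim r As Long, c As Long, s As String, n As Integer
            For r = 1 To ActiveSheet.UsedRange.Rows.Count
                s = ""
                For c = 2 To 29                      ' columns B through AC
                    s = s & CStr(Cells(r, c).Value)
                Next c
                n = FreeFile
                Open "C:\output\row" & r & ".txt" For Output As #n
                Print #n, s                          ' one concatenated row per file
                Close #n
            Next r
        End Sub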

  • Does a Perl module know where it is installed?

    - by CoffeeMonster
    Hi Perl programmers, I have started creating a Perl package that contains a default email template. The MANIFEST looks something like:

        SendMyEmail.pm
        SendMyEmail/default_email.tt

    Currently I know where the module (and the template) are, but does the module itself know where on disk it is? Could the module find the default template without my help?

        # This is what I would like to do.
        package SendMyEmail;

        sub new {
            my ($self, $template) = @_;
            $template ||= $dir_of_SendMyEmail . '/SendMyEmail/default_email.tt';  # ??
        }

    Is there a better way of including the template's text, or a better place to put the template? Any references to CPAN modules that do something similar would be welcome. Thanks in advance.
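
    A sketch of the usual self-locating idiom: once the module is loaded, %INC maps "SendMyEmail.pm" to the file Perl actually loaded, and core modules turn that into a directory (the template path mirrors the question's MANIFEST):

        package SendMyEmail;

        use strict;
        use warnings;
        use File::Basename ();
        use File::Spec;

        # Absolute directory containing SendMyEmail.pm itself.
        my $MODULE_DIR = File::Basename::dirname(
            File::Spec->rel2abs( $INC{'SendMyEmail.pm'} || __FILE__ )
        );

        sub new {
            my ( $class, $template ) = @_;
            $template ||= File::Spec->catfile( $MODULE_DIR, 'SendMyEmail', 'default_email.tt' );
            return bless { template => $template }, $class;
        }

        1;

    On CPAN, File::ShareDir (with the template installed into the dist's share directory) is the conventional way to ship and locate files like this.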

  • Solr adding document cycle & wait on response issue

    - by user1585896
    I am trying to send HTTP POST requests to Solr to add 50,000 documents (individual requests, one after another, in a while loop). I am using DefaultHttpClient in Java to connect to Solr, and when I call the execute method on my HttpPost, Solr takes 3 to 4 ms to respond. I have commit=false, autoCommit=false, and autoSoftCommit=false. My question is why it takes that much time to respond, and what cycle Solr follows when adding a new document. Basically I want to send add requests but not commit, to see how many requests Solr can handle without doing any kind of commit (i.e. without any disk access). My guess was that with the above parameters turned off I should be hitting Solr about 10,000 times every second, but my result is 300 times a second. I am generating random data to add in my code.
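
    One likely explanation: at roughly 3-4 ms per round trip, one document per POST caps out near the ~300 requests/second observed, because every add pays full HTTP and request-parsing overhead regardless of the commit settings. A sketch of batching many documents into each update request (the URL, batch size, and field names are assumptions, not the asker's actual setup):

        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.entity.ContentType;
        import org.apache.http.entity.StringEntity;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.util.EntityUtils;

        public class SolrBatchAdd {
            public static void main(String[] args) throws Exception {
                DefaultHttpClient client = new DefaultHttpClient();
                StringBuilder batch = new StringBuilder("<add>");
                for (int i = 0; i < 50000; i++) {
                    batch.append("<doc><field name=\"id\">").append(i).append("</field></doc>");
                    if ((i + 1) % 1000 == 0) {               // 1,000 docs per HTTP request
                        batch.append("</add>");
                        HttpPost post = new HttpPost("http://localhost:8983/solr/update");
                        post.setEntity(new StringEntity(batch.toString(),
                                ContentType.create("text/xml", "UTF-8")));
                        // Consume the response so the connection can be reused.
                        EntityUtils.consume(client.execute(post).getEntity());
                        batch = new StringBuilder("<add>");
                    }
                }
                client.getConnectionManager().shutdown();
            }
        }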

  • Is shortening property names worth it?

    - by raam86
    In How to Node's "Blog rolling with node.js and mongoDB", the author mentions that it can be a good idea to shorten property names:

        ... an oft-reported issue with mongoDB is the size of the data on the disk ... each and every record stores all the field names ... This means that it can often be more space-efficient to have properties such as 't' or 'b' rather than 'title' or 'body', however for fear of confusion I would avoid this unless truly required!

    I am aware of solutions for how to do it; I am more interested in: when is it truly required?
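
    For a rough sense of scale (my arithmetic, not the article's): every document stores its field names verbatim, so renaming title (5 bytes) to t (1 byte) saves 4 bytes per document; across 100 million documents that is only about 400 MB. That suggests the trick is truly required only at very large document counts, or with many long field names per record.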

  • How can I measure file access performance (and volume) of a (Java) application

    - by stmoebius
    Given an application, how can I measure the amount of data it reads and writes, and the time it spends reading from and writing to disk? The specific application is Java-based (JBoss) and multi-threaded, running as a service on Windows 7/2008 x64. The overall goal is to determine whether, and why, file access is a bottleneck in my application, so running the application in a defined and repeatable scenario is a given. File access may be local as well as on network shares. Windows Performance Monitor appears to be too hard to use (unless someone can point me to a helpful explanation). Any ideas?
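
    At the JVM level, one hedged approach is wrapping the application's streams so every read is counted and timed; a sketch (wiring it into JBoss's actual file access is the hard part and is not shown):

        import java.io.FilterInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        /** Counts bytes and wall-clock time spent in read() calls on one stream. */
        public class MeteredInputStream extends FilterInputStream {
            private long bytes;
            private long nanos;

            public MeteredInputStream(InputStream in) { super(in); }

            @Override public int read() throws IOException {
                long t0 = System.nanoTime();
                int b = super.read();
                nanos += System.nanoTime() - t0;
                if (b >= 0) bytes++;
                return b;
            }

            @Override public int read(byte[] buf, int off, int len) throws IOException {
                long t0 = System.nanoTime();
                int n = super.read(buf, off, len);
                nanos += System.nanoTime() - t0;
                if (n > 0) bytes += n;
                return n;
            }

            public long bytesRead()   { return bytes; }
            public long millisSpent() { return nanos / 1000000L; }
        }

    Outside the JVM, Windows can report the same numbers per process: Process Explorer's I/O columns, or Performance Monitor's "Process -> IO Read Bytes/sec" and "IO Write Bytes/sec" counters, track volume without touching the application.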

  • Google App Engine says "Must authenticate first." while trying to deploy any app

    - by Oleksandr Bolotov
    Google App Engine says "Must authenticate first." when I try to deploy any app:

        me@myhost /opt/google_appengine $ python appcfg.py update ~/sda2/workspace/lyapapam/
        Application: lyapapam; version: 1.
        Server: appengine.google.com.
        Scanning files on local disk.
        Scanned 500 files.
        Scanned 1000 files.
        Initiating update.
        Email: <my_email_was_here>@gmail.com
        Password for <my_email_was_here>@gmail.com:
        Error 401: --- begin server output ---
        Must authenticate first.
        --- end server output ---

    We get this message with any application and under any developer account available to us. This is what we have installed:

        App Engine SDK - 1.3.2
        PIL - 1.1.7
        Python - 2.5.5
        pip - 0.6.3
        ssl - 1.15
        wsgiref - 0.1.2

    How can I fix it? Is this a well-known problem?

  • Serialize an object to string

    - by Vaccano
    I have the following method to save an object to a file:

        // Save an object out to the disk
        public static void SerializeObject<T>(this T toSerialize, String filename)
        {
            XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType());
            TextWriter textWriter = new StreamWriter(filename);
            xmlSerializer.Serialize(textWriter, toSerialize);
            textWriter.Close();
        }

    I confess I did not write it (I only converted it to an extension method that takes a type parameter). Now I need it to give the XML back to me as a string (rather than saving it to a file). I am looking into it, but I have not figured it out yet. I thought this might be really easy for someone familiar with these objects; if not, I will figure it out eventually.
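
    A sketch of the string-returning variant: swap the StreamWriter for a StringWriter and the serialized XML comes back in memory (same pattern as the original, with a using block for disposal):

        // Serialize an object to an XML string instead of a file.
        public static string SerializeObjectToString<T>(this T toSerialize)
        {
            XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType());
            using (StringWriter textWriter = new StringWriter())
            {
                xmlSerializer.Serialize(textWriter, toSerialize);
                return textWriter.ToString();
            }
        }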

  • Save the state of an external running program

    - by SuitUp
    Hi, my goal is to save the state of a running process and later resume it from that point. The state doesn't have to be from that exact moment; a snapshot taken after, say, five seconds is fine too. The problem is that I can't alter the code of this external program; I'm not even aware of its architecture. I have the resources to save the whole program from memory to disk, but I need some tips on where to start. I can't use a VM like VirtualBox to save the state of the whole operating system, and the program could be written in C++, C, C#, ...
