Search Results

Search found 39577 results on 1584 pages for 'temp files'.

Page 418/1584 | < Previous Page | 414 415 416 417 418 419 420 421 422 423 424 425  | Next Page >

  • Painless way to install a new version of R?

    - by Shane
    Andrew Gelman recently lamented the lack of an easy upgrade process for R (probably more relevant on Windows than Linux). Does anyone have a good trick for doing the upgrade, from installing the software to copying all the settings/packages over? The suggestion below was contained in the comments and is what I've been using recently. First you install the new version, then run this in the old version:

        #--run in the old version of R
        setwd("C:/Temp/")
        packages <- installed.packages()[,"Package"]
        save(packages, file="Rpackages")

    Followed by this in the new version:

        #--run in the new version
        setwd("C:/Temp/")
        load("Rpackages")
        for (p in setdiff(packages, installed.packages()[,"Package"]))
            install.packages(p)

    Read the article

  • Is it possible in Perl to require that a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know what I'm asking for exactly, but I'm writing a series of subroutines to be available for many individual scripts that all process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build a small library of subs that makes it easier to manage. Each script handles a different incoming flat file with its own formatting, sorting, grouping and outputting requirements. One common aspect is that we have small text files housing counters, which are used to name the output files so that there are no duplicate file names. Because reading the counter value from its file is a common operation, I'd like to put it in a sub; the data processing itself stays specific to each script. I'd then like a second sub that writes the updated counter back once the data has been processed. Is there a way to make a call to the second sub a requirement whenever the first one is called? Ideally it would even be an error that prevents the script from running at all, much like a syntax error. EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for the current process:

        require "importLibrary.plx";

        # open data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx to get the
        # counter value from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # process data based on unique formatting and requirements
            # output to task files named using $counter
            # increment $counter
        }

        # update counter file with new value of $counter
        &updateCounterInfo($counter);
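    One pattern worth sketching: Perl has no compile-time way to force "sub B must follow sub A", but a cheap approximation is to hand out the counter through an object whose destructor complains if the update never happened. This is only a hedged illustration -- CounterHandle and its one-line file format are invented for the example, not part of the library above:

        package CounterHandle;
        use strict;
        use warnings;

        sub new {
            my ($class, $file) = @_;
            open my $fh, '<', $file or die "can't read $file: $!";
            chomp(my $value = <$fh>);
            close $fh;
            return bless { file => $file, value => $value, updated => 0 }, $class;
        }

        sub value { $_[0]{value} }

        sub update {
            my ($self, $new) = @_;
            open my $fh, '>', $self->{file} or die "can't write $self->{file}: $!";
            print $fh $new;
            close $fh;
            $self->{updated} = 1;
        }

        # Loud runtime complaint if the counter was read but never written back.
        sub DESTROY {
            warn "counter file $_[0]{file} was never updated\n" unless $_[0]{updated};
        }

        1;

    A script would then do `my $c = CounterHandle->new($counterFileName);` and the warning fires at exit if `$c->update(...)` was never called -- not a syntax error, but hard to miss.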

    Read the article

  • Backup of folder + database - Python

    - by RadiantHex
    Hi there, I feel like this is quite delicate. I have various folders with projects I would like to back up into a zip/tar file, but I'd like to avoid backing up files such as .pyc files and temporary files. I also have a Postgres database I need to back up. Any tips for running this operation as a Python script? Also, is there any way to keep the process from hogging resources while it runs? Help would be very much appreciated.
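    A minimal sketch of one way to do it, assuming `pg_dump` is on the PATH and credentials live in ~/.pgpass; the paths and suffix list are placeholders:

        import os
        import subprocess
        import tarfile

        EXCLUDED_SUFFIXES = ('.pyc', '.pyo', '.tmp', '~')

        def keep(tarinfo):
            # Drop compiled/temporary files; keep everything else unchanged.
            return None if tarinfo.name.endswith(EXCLUDED_SUFFIXES) else tarinfo

        def backup_folder(src, dest_tar):
            with tarfile.open(dest_tar, 'w:gz') as tar:
                tar.add(src, arcname=os.path.basename(src), filter=keep)

        def backup_db(dbname, outfile):
            with open(outfile, 'wb') as out:
                subprocess.check_call(['pg_dump', dbname], stdout=out)

        if __name__ == '__main__':
            os.nice(19)  # lower CPU priority so the backup doesn't hog the machine
            backup_folder('/home/me/projects', 'projects.tar.gz')
            backup_db('mydb', 'mydb.sql')

    Note that `os.nice` only lowers CPU priority (and is Unix-only); on Linux, `ionice` is the usual lever if disk contention is the real problem.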

    Read the article

  • Android: Issue with acceptable file types via bluetooth

    - by poeschlorn
    Hi guys, I've got a problem with pushing files to my Nexus One: it seems to me that only a small selection of file types is accepted by my phone (such as jpg, gif and so on). I recently tried to push other files to my phone (in my case gpx) and the phone rejected them automatically... Is there a way to bypass or extend this filter? Is there also a way to catch those files with a service? greets, poeschlorn

    Read the article

  • Most efficient way of checking for a return from a function call in Perl

    - by Gaurav Dadhania
    I want to add the return value from the function call to an array iff something is actually returned (i.e. only when the subroutine has an explicit return statement). So I'm using:

        unshift @{$errors}, "HashValidator::$vfunction($hashref)";

    but this adds the string of the function call to the array rather than calling it. I also tried

        unshift @{$errors}, $temp if defined my $temp = "HashValidator::$vfunction($hashref)";

    with the same result. What would a Perl one-liner look like that does this efficiently? (I know I can do the ugly, multi-line check, but I want to learn.) Thanks,
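    For what it's worth, a hedged sketch of the direct-call version -- the symbolic call by name needs `no strict 'refs'`, and the variable names are the ones from the question:

        my $result = do {
            no strict 'refs';
            &{"HashValidator::$vfunction"}($hashref);   # call it, don't quote it
        };
        unshift @{$errors}, $result if defined $result;

    Quoting the call, as in the question, merely interpolates `$vfunction` and `$hashref` into a string; Perl never invokes anything.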

    Read the article

  • How can I test whether a csv file is currently open and being written to - the file, not a file handle

    - by Henry
    I've seen a few questions/answers here that deal with this, but none really gives me an answer/solution. I've got a Clipper system writing csv files to a Windows directory, and a Perl script running on a Linux server that reads a mount of that Windows directory and imports the files into a database. Right now we're using flag files to indicate when a csv is no longer being written to; the flag file gets written after the csv is done. I'd really rather get what I need from the csv itself, but I can't seem to find a way to tell when the file is open and being written to. lsof doesn't seem to answer my need. I've tried using flock to open the file with an exclusive lock, thinking it might throw an error if the file is being modified, but it doesn't. Any thoughts?
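    Absent a visible lock, one hedged heuristic is to treat a file as complete once its size and mtime stop changing; `process_csv` below is a stand-in for the real import routine, and the mount path is illustrative:

        use strict;
        use warnings;

        # Returns true once size and mtime have been stable for $rounds polls.
        sub looks_finished {
            my ($path, $interval, $rounds) = @_;
            my @prev = (stat $path)[7, 9] or return 0;   # size, mtime
            for (1 .. $rounds) {
                sleep $interval;
                my @now = (stat $path)[7, 9] or return 0;
                return 0 if $now[0] != $prev[0] || $now[1] != $prev[1];
                @prev = @now;
            }
            return 1;
        }

        process_csv($_) for grep { looks_finished($_, 2, 2) } glob '/mnt/winshare/*.csv';

    It is a heuristic, not a guarantee -- a writer that stalls longer than the polling window will fool it -- but it removes the flag-file dependency.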

    Read the article

  • Is nothing truly ever deleted in git?

    - by allenskd
    I'm currently learning git; usually I'm a bit skeptical of VCSs since I have a hard time getting used to them. I deleted a branch called "experimental" containing some tmp files, and saw the files removed from my working directory, so I scratched my head and wondered whether this is normal and whether I can bring the branch back in case I need it again. I found the SHA of the commit that added the tmp files, recreated the branch from that SHA, and saw it again with all the files and their content. Can everything I do in the working directory be recovered once I commit it? This might seem like a silly question to many people, but it intrigues me, so I want to know the limits.
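    As a hedged sketch of the recovery path (the same idea, using the reflog rather than hunting for the SHA by hand; <sha> is a placeholder):

        # Find the commit the deleted branch pointed at, then re-grow the
        # branch there. Unreachable objects linger until git gc prunes them.
        git reflog                      # locate the old tip's SHA
        git branch experimental <sha>   # recreate the branch at that commit

    So "deleted" branches are recoverable for a while, but not forever: once nothing references a commit and `git gc` runs past its expiry window, the objects really are gone.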

    Read the article

  • Change HttpContext.Request.InputStream

    - by user320478
    I am getting a lot of errors for HttpRequestValidationException in my event log. Is it possible to HtmlEncode all the inputs from an override of ProcessRequest on the web page? I have tried this, but context.Request.InputStream.CanWrite is always false. Is there any way to HtmlEncode all the fields when the request is made?

        public override void ProcessRequest(HttpContext context)
        {
            if (context.Request.InputStream.CanRead)
            {
                IEnumerator en = HttpContext.Current.Request.Form.GetEnumerator();
                while (en.MoveNext())
                {
                    //Response.Write(Server.HtmlEncode(en.Current + " = " +
                    //    HttpContext.Current.Request.Form[(string)en.Current]));
                }
                long nLen = context.Request.InputStream.Length;
                if (nLen > 0)
                {
                    string strInputStream = string.Empty;
                    context.Request.InputStream.Position = 0;
                    byte[] bytes = new byte[nLen];
                    context.Request.InputStream.Read(bytes, 0, Convert.ToInt32(nLen));
                    strInputStream = Encoding.Default.GetString(bytes);
                    if (!string.IsNullOrEmpty(strInputStream))
                    {
                        List<string> stream = strInputStream.Split('&').ToList<string>();
                        Dictionary<int, string> data = new Dictionary<int, string>();
                        if (stream != null && stream.Count > 0)
                        {
                            int index = 0;
                            foreach (string str in stream)
                            {
                                if (str.Length > 3 && str.Substring(0, 3) == "txt")
                                {
                                    string textBoxData = str;
                                    string temp = Server.HtmlEncode(str);
                                    //stream[index] = temp;
                                    data.Add(index, temp);
                                    index++;
                                }
                            }
                            if (data.Count > 0)
                            {
                                List<string> streamNew = stream;
                                foreach (KeyValuePair<int, string> kvp in data)
                                {
                                    streamNew[kvp.Key] = kvp.Value;
                                }
                                string newStream = string.Join("", streamNew.ToArray());
                                byte[] bytesNew = Encoding.Default.GetBytes(newStream);
                                if (context.Request.InputStream.CanWrite)
                                {
                                    context.Request.InputStream.Flush();
                                    context.Request.InputStream.Position = 0;
                                    context.Request.InputStream.Write(bytesNew, 0, bytesNew.Length);
                                    //Request.InputStream.Close();
                                    //Request.InputStream.Dispose();
                                }
                            }
                        }
                    }
                }
            }
            base.ProcessRequest(context);
        }

    Read the article

  • Including File Paths in Ocra?

    - by Ruby Novice
    I'm trying to figure out whether there is a way to include file paths in Ocra and still have the application build an exe. I have it set up so that the .exe generated by Ocra sits on the same directory level as another folder. That folder is named 'Place Files Here', and the program simply performs regex commands on text files in the 'Place Files Here' folder. Ocra runs without errors if I use only Dir.getwd, but if I try to add

        directory = Dir.getwd + '/Place Files Here'

    it won't run. Any help would be greatly appreciated. Thanks!
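    One hedged workaround, relying on the `OCRA_EXECUTABLE` environment variable that Ocra sets at runtime (the folder name is from the question; the glob pattern is illustrative):

        # Resolve 'Place Files Here' relative to the packaged exe, or to
        # this script while developing.
        base = ENV['OCRA_EXECUTABLE'] ? File.dirname(ENV['OCRA_EXECUTABLE']) : File.dirname(__FILE__)
        directory = File.join(base, 'Place Files Here')

        Dir.glob(File.join(directory, '*.txt')).each do |path|
          text = File.read(path)
          # ... apply the regex commands to text here ...
        end

    The point is that the path is computed when the exe runs, so Ocra's build step never needs to see the folder at all.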

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

    1) The user's browser requests http://www.example.com/files/file1.zip
    2) The request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC Address Translation and then pass the request back onto the network; for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.

    Read the article

  • Deleting a node in a circular linked list in C++?

    - by angad Soni
    I was wondering if anyone could help me understand whether this code for deleting a node from a circular linked list would work, or whether there is something I'm missing. I'm using C++:

        void circularList::deleteNode(int x)
        {
            node *current;
            node *temp;
            current = this->start;
            while (current->next != this->start) {
                if (current->next->value == x) {
                    temp = current->next;
                    current->next = current->next->next;
                    delete current->next;
                }
            }
        }
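    A hedged corrected sketch, keeping the names from the question (it assumes `start` exists, the list is non-empty, and the node holding `x` is not `start` itself -- deleting the entry point needs extra care):

        void circularList::deleteNode(int x)
        {
            node *current = start;
            while (current->next != start) {
                if (current->next->value == x) {
                    node *victim = current->next;   // remember the node being removed
                    current->next = victim->next;   // splice it out of the ring first
                    delete victim;                  // then free it -- not current->next,
                                                    // which now points at the survivor
                } else {
                    current = current->next;        // advance, or the loop never ends
                }
            }
        }

    The two fixes: the original deleted `current->next` *after* re-pointing it (freeing the wrong node), and it never advanced `current`, so the loop could never make progress.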

    Read the article

  • How can I change the standard input/output of another program?

    - by Azad Salahli
    I have a console application written in C++. It simply reads an integer from standard input (keyboard) and writes another integer to standard output (screen). Now I want to feed some tests to that program and check its answers using another program; in other words, I want to write an electronic judge for it. I want the program under test to read from a file and write to a file without changing its source code. How can I do that? I tried assigning input & output to files before executing the C++ program, but it did not work:

        assign(input, 'temp.in');
        reset(input);
        assign(output, 'temp.out');
        rewrite(output);
        exec('domino.exe');
        close(input);
        close(output);
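    If the tested binary must stay untouched, the judge can simply let the shell do the rewiring when launching the process. A minimal C++ sketch (file names taken from the question):

        #include <cstdlib>

        int main()
        {
            // domino.exe reads from temp.in instead of the keyboard and
            // writes to temp.out instead of the screen.
            int rc = std::system("domino.exe < temp.in > temp.out");
            // ... compare temp.out against the expected answer here ...
            return rc;
        }

    The Pascal-style `assign(input, ...)` attempt likely fails because it redirects the judge's own text-file variables, not the OS-level handles the child process inherits; the redirection has to be applied to the child itself.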

    Read the article

  • Why does File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images to a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished at 20,462 files. Out of curiosity I wrote a tiny recursive function to count the items; it returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently, by querying for file names first instead of traversing the directory, but I'm wondering what could have caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system.
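    For reference, a minimal sketch of the cross-check with File::Find itself (the path is illustrative), counting both everything the traversal visits and only the entries that pass the -f test from the question:

        use strict;
        use warnings;
        use File::Find;

        my ($all, $plain) = (0, 0);
        find(sub { $all++; $plain++ if -f }, 'C:/images');
        print "visited $all entries, $plain plain files\n";

    If this total also falls short, the usual suspects are permission problems or NTFS junction points along the way, rather than File::Find simply giving up.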

    Read the article

  • Android Assets No Value Read?

    - by BahaiResearch.com
        AssetManager assets = myContext.getAssets();
        String[] files = assets.list("MyFolder");
        InputStream myInput = assets.open("MyFolder/" + files[0]);
        int i = myInput.read();

    In this case 'i' is -1, meaning nothing was read. Why would nothing be read if the file is there? The 'files' variable lists the file as well. Do I need to do anything to the file I put into the assets folder to get it to be readable?

    Read the article

  • Information sent in an HTML form

    - by dedalo
    Hi, I have an HTML file that shows the user the contents of a database (displayed as a table). The user can choose one of the rows. When this is done, the selection made by the user is sent to a servlet that will work with that information. Imagine that this servlet is going to look for files related to the information chosen by the user. What I'd like to do is also give the user the option of choosing the number of files the servlet should look for. That way the user could select one of the rows shown in the table and also type the number of files to look for. So far I'm able to send the user's table choice to the servlet, but I'd like to know whether it is possible to attach the requested number of files to this information. Thanks
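    A hedged sketch of the form side -- the names (`rowId`, `fileCount`, `LookupServlet`) are invented for illustration:

        <form action="LookupServlet" method="post">
          <table>
            <tr><td><input type="radio" name="rowId" value="1"> First row</td></tr>
            <tr><td><input type="radio" name="rowId" value="2"> Second row</td></tr>
          </table>
          <label>Number of files: <input type="text" name="fileCount" size="4"></label>
          <input type="submit" value="Search">
        </form>

    Both values then arrive in the same POST, and the servlet reads them with `request.getParameter("rowId")` and `request.getParameter("fileCount")`.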

    Read the article

  • Python programming. Accessing Windows right-click menu options

    - by Zack
    I'm hoping to automate a few tasks at work, one of them being combining and converting PowerPoint files to PDFs. I'm a bit of a newbie (I just finished Magnus Lie Hetland's Beginning Python), so I'm not entirely sure what I'm specifically asking. On Windows, one can select multiple files, right-click, and select "combine as Adobe PDF". I've figured out the grouping of the files I want to convert (I traverse the directory and nest the files inside a list based on their names), but I'm unsure how to pursue the next step: the right-click/combine command. Googling has led me to things like win32api, pywinauto, and ctypes, but as I read over what they do, my newbie-ness prevents me from knowing which is the tool I need. Could anyone suggest a few good resources or tips?
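    The right-click "combine" entry belongs to Acrobat, which is hard to script directly; a more common route is to automate PowerPoint itself over COM. A hedged sketch with pywin32 (assumes PowerPoint 2007+ for PDF export; the paths are placeholders):

        import glob
        import win32com.client  # pywin32

        PP_SAVE_AS_PDF = 32  # PpSaveAsFileType: ppSaveAsPDF

        def convert_to_pdf(ppt_path, pdf_path):
            app = win32com.client.Dispatch('PowerPoint.Application')
            pres = app.Presentations.Open(ppt_path, WithWindow=False)
            pres.SaveAs(pdf_path, PP_SAVE_AS_PDF)
            pres.Close()
            app.Quit()

        for path in glob.glob(r'C:\reports\*.ppt'):
            convert_to_pdf(path, path[:-4] + '.pdf')

    This converts each deck separately; merging the resulting PDFs into one file would take a further step with a PDF library, since the one-click combine is Acrobat functionality.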

    Read the article

  • How do I add a folder/group to SCM (CVS) in Xcode?

    - by Brad
    I have an iPhone project in Xcode which is checked into CVS via Xcode's SCM support. I have created a new folder in this project and created a group for its files. However, the files in this group/folder are not in CVS, and I cannot figure out how to get them in there. The usual "Add to Repository" item under the "SCM" menu is always grayed out when I select one of the files - I assume this is because the folder itself is not in CVS. How do I add the folder/files to CVS?

    Read the article

  • Better to combine & minify javascript or use Google CDN?

    - by jessegavin
    I am building a site which currently uses javascript from several sources: Group 1: Google Maps API v3 (hosted by Google) Group 2: jQuery & swfobject (hosted on Google CDN) Group 3: Several jQuery plugins and non-jquery javascript files (hosted on my server) I am using Justin Etheredge's tool SquishIt to combine and minify all the javascript files that are hosted on my server (group 3). I am wondering if the site would 'feel' faster to users if I were to host the files in (group 2) locally so that they can be combined with all the other files in (group 3) and requiring only one HTTP request for groups 2 & 3. This would mean that I don't get the benefits of the Google CDN however. Does anyone have any advice on this matter?

    Read the article

  • How can I tell SVN that a file was renamed by another tool?

    - by detly
    I am using Subversion with some files that are managed by another application. I would like to rename the files, but the rename must be done from within the application so that it correctly updates its own metadata. This means that I cannot simply use SVN's rename command - the application's metadata would become inconsistent. So how can I tell SVN that the files in the working copy have been renamed by this application, so that it preserves history in the same way as the rename command does?

    Read the article

  • Implement threading to prevent a UI block on a bug in an async function

    - by Marcx
    I think I ran up against a bug in an async function, specifically getDirectoryListingAsync() of the File class. This method is supposed to return an object containing the list of files in a specified folder. I found that when calling this method on a directory with a lot of files (in my tests, more than 20k), after a few seconds the UI blocks until the process completes. I think the method is separated into two main phases: 1) get the list of files; 2) create the array with the details of the files. Phase 1 seems to be async (for a few seconds the UI is responsive); then, when the process passes from phase 1 to phase 2, the UI blocks until the complete event is dispatched. Here's some (simple) code:

        private function checkFiles(dir:File):void {
            if (dir.exists) {
                dir.addEventListener(FileListEvent.DIRECTORY_LISTING, listaImmaginiLocale);
                dir.getDirectoryListingAsync();
                // after this point the UI responds well for the first few
                // seconds (phase 1), then freezes (phase 2)
            }
        }

        private function listaImmaginiLocale(event:FileListEvent):void {
            // from this point on the UI is responsive again...
        }

    Some functions in my projects make heavy use of the CPU, and to prevent the UI from blocking I implemented a simple function that waits after a batch of iterations, giving the UI time to refresh:

        private var maxIteration:int = 150000;

        private function sampleFunct(offset:int = 0):void {
            if (offset < maxIteration) {
                // do something

                // recurse via a timeout: every 1000 iterations wait 15 ms
                // so the UI can refresh, otherwise recurse immediately
                // (1000 is arbitrary for this example; I usually tune it
                // to how heavy the function is)
                setTimeout(function():void { sampleFunct(++offset); }, (offset % 1000 ? 0 : 15));
            }
        }

    Using this method I get a responsive UI without hurting performance much. I'd like to implement the same idea inside getDirectoryListingAsync, but I don't know if that's possible, how to do it, or which file to edit or extend. Any suggestions?

    Read the article

  • PHP include doesn't work

    - by Chris
    I'm not sure how simple this is to solve, but I assume I'm doing something wrong. I'm new to PHP, so bear with me, please. When I started learning PHP, I always placed all my project files into the same folder along with index.php and thus included everything like this:

        <?php include('./translation.php'); ?>

    Later on, as I gained experience and started using folders, placing my files into subfolders, I ended up successfully including my files with the following:

        <?php include('../translation.php'); ?>

    My trouble-free coding took an unexpected turn when I decided to start using sub-subfolders. After placing the files even deeper into the file structure, I was shocked to find that I cannot include them anymore using:

        <?php include('.../translation.php'); ?>

    Now I'm lost. What did I do wrong? Am I to understand that I cannot include files deeper than two directories in the project? Should I structure my files differently?
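    Dot-counting aside, a sturdier sketch is to anchor the include on the including file's own location, so moving files deeper never changes the prefix logic (the two-level climb here matches the example above):

        <?php
        // PHP 5.3+:
        include __DIR__ . '/../../translation.php';

        // Pre-5.3 equivalent:
        include dirname(__FILE__) . '/../../translation.php';
        ?>

    Each additional `../` climbs exactly one directory; `...` is never a path operator, which is why the three-dot include fails.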

    Read the article

  • Restoring web reference in Visual Studio 2008

    - by Mark Cheeseborough
    I had a web reference set in my VS2008 ASP.NET project, but due to some source control weirdness it is no longer listed in the project. I have the set of files in the Web References folder under my project. There's a .wsdl, .disco and several .datasource files. Is there any way to re-add this web reference through the existing files rather than using the "Add Web Reference" dialog?

    Read the article

  • Boost singleton and undefined reference

    - by Ockonal
    Hello, I use the singleton pattern globally in my project - to make it easier, boost::singleton. The current project uses the Ogre3D library for rendering. Here is one class:

        class GraphicSystem : public singleton<GraphicSystem>
        {
        private:
            Ogre::RenderWindow *mWindow;

        public:
            Ogre::RenderWindow *getWindow() const { return mWindow; }
        };

    In the GraphicSystem constructor I fill in the mWindow value:

        mWindow = mRoot->createRenderWindow(...);

    I checked it; everything initializes normally. Now I have to use the window in the input system (to get the window handle). Somewhere else, in another class:

        Ogre::RenderWindow *temp = GraphicSystem::get_mutable_instance().getWindow();
        GraphicSystem::get_mutable_instance().getWindow()->getCustomAttribute("WINDOW", &mWindowHandle);

    temp is 0x00, and there is a segfault at the last line (getting the custom attribute). I can't understand why the singleton returns a null pointer for the window. All the other singleton-based classes work well.

    Read the article

  • Why does my gcc-compiled code give a segmentation fault?

    - by gcc
    Why do I encounter a segmentation fault every time? I still have not found my mistake. Sometimes the program gives a segmentation fault; sometimes the memory stack limit is exceeded. I think you will (easily) understand what the code is for:

        void my_main()
        {
            char *tutar[50], tempc;
            int i = 0, temp, g = 0, a = 0, j = 0, d = 1,
                current = 0, command = 0, command2 = 0;

            do {
                tutar[i] = malloc(sizeof(int));
                scanf("%s", tutar[i]);
                temp = *tutar[i];
            } while (temp == 'x');

            i = 0;
            num_arrays = atoi(tutar[i]);
            i = 1;
            tempc = *tutar[i];
            if (tempc != 'x' && tempc != 'd' && tempc != 'a' && tempc != 'j') {
                current = 1;
                arrays[current] = malloc(sizeof(int));
                l_arrays[current] = calloc(1, sizeof(int));
                c_arrays[current] = calloc(1, sizeof(int));
            }

            i = 1;
            current = 1;
            while (1) {
                tempc = *tutar[i];
                if (tempc == 'x')
                    break;
                if (tempc == 'n') {
                    ++current;
                    arrays[current] = malloc(sizeof(int));
                    l_arrays[current] = calloc(1, sizeof(int));
                    c_arrays[current] = calloc(1, sizeof(int));
                    ++i;
                    continue;
                }
                if (tempc == 'd') {
                    ++i;
                    command = atoi(tutar[i]) - 1;
                    free(arrays[command]);
                    free(l_arrays[command]);
                    free(c_arrays[command]);
                    ++i;
                    continue;
                }
                if (tempc == 'j') {
                    ++i;
                    current = atoi(tutar[i]) - 1;
                    if (arrays[current] == NULL) {
                        arrays[current] = malloc(sizeof(int));
                        l_arrays[current] = calloc(1, sizeof(int));
                        c_arrays[current] = calloc(1, sizeof(int));
                        ++i;
                    } else {
                        a = l_arrays[current];
                        j = l_arrays[current];
                        ++i;
                    }
                    continue;
                }
            }
        }

    Read the article
