Search Results

Search found 18385 results on 736 pages for 'flash memory'.


  • Creating a DIB by only specifying the size with GDI+ and DotNet...

    - by Kris Erickson
    I have just recently discovered the difference between the different Bitmap constructors in GDI+. Calling

        var bmp = new Bitmap(width, height, pixelFormat);

    creates a DDB (Device Dependent Bitmap), whereas

        var bmp = new Bitmap(someFile);

    creates a DIB (Device Independent Bitmap). This usually doesn't matter, except when handling very large images: a DDB will run out of memory, and at different sizes depending on the machine and its video memory. I need to create a DIB rather than a DDB, but specify the height, width and pixel format. Does anyone know how to do this in DotNet? Also, is there a guide to which type of Bitmap (DIB or DDB) is created by which Bitmap constructor?
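    A minimal sketch of one common workaround, assuming desktop System.Drawing: allocate the pixel buffer yourself and hand it to the scan0 constructor, so GDI+ wraps your system-memory buffer (DIB-style) instead of asking the device for one. The 32bpp format and the DibFactory name are illustrative assumptions, not the only option.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        static class DibFactory
        {
            // Wraps a caller-allocated buffer; GDI+ does not own this memory.
            public static Bitmap Create(int width, int height)
            {
                int stride = width * 4; // 4 bytes/pixel keeps the stride 4-byte aligned
                IntPtr scan0 = Marshal.AllocHGlobal(stride * height);
                return new Bitmap(width, height, stride,
                                  PixelFormat.Format32bppArgb, scan0);
            }
        }

    Note that the buffer must stay alive for the Bitmap's lifetime and be released with Marshal.FreeHGlobal after the Bitmap is disposed.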


  • Caveats to be aware of when using threading in Python?

    - by knorv
    I'm quite new to threading in Python and have a couple of beginner questions. When starting more than, say, fifty threads using the Python threading module, I start getting MemoryError. The threads themselves are very slim and not very memory hungry, so it seems like it is the overhead of the threading that causes the memory issues. Is there something I can do to increase the memory capacity or otherwise make Python allow a larger number of threads? What is the maximum number of threads you've been able to run in your Python code using the threading module? Did you do any tricks to achieve that number? Are there any other caveats to be aware of when using the threading module?
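    A hedged sketch of the usual culprit and fix: each thread reserves a stack (commonly several megabytes of address space on Linux), so a few hundred threads can exhaust a 32-bit process. threading.stack_size is a real stdlib call that shrinks that reservation; the 512 KiB value here is an illustrative assumption.

        import threading

        threading.stack_size(512 * 1024)  # must be set BEFORE the threads start

        def worker(n):
            pass  # slim work, as in the question

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(500)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()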


  • java increase xmx dynamically at runtime

    - by Tomer
    Hi, I have a JVM server on my machine, and now I want to have 2 app servers of mine sitting on the same machine. However, I want the standby one to have a really low amount of memory allocated with -Xmx because it's passive; once the main (active) server goes down, I want to allocate more memory to my passive server, which is already up, without restarting it. (I can't have them both run with too much -Xmx; note they would consume memory at startup, and I can't allow the possibility of an OutOfMemoryError.) So I want the passive server to start with a low -Xmx, and once the active one goes down, I want it to receive a much larger -Xmx. Is there a way for me to achieve that? Thanks
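    A hedged note rather than a definitive answer: -Xmx itself cannot be raised on a live JVM, but a common workaround is to give the standby a low -Xms with a high -Xmx, so it commits little memory at startup and only grows its heap if it actually becomes active. The jar name below is an illustrative assumption.

        java -Xms64m -Xmx4g -jar standby-server.jar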


  • malloc hangs in Linux

    - by Rahul
    I am using Linux on a 16 GB machine with 2 quad-core CPUs. There are 8 processes which are doing some work (CPU intensive / network I/O). Out of those, 4 have a memory leak (these are test conditions, so having leaks here is not a problem). The total space occupied by all processes is around 15.4 GB; only 200 MB is free in the system. Things are fine for some hours, but after that malloc hangs (in a process which doesn't have a memory leak). It's stuck for more than 4 minutes (note: CPU is not at 100%, but I/O has gone up significantly). There is no problem in the hung process itself (it has not corrupted memory). What is malloc doing? (Is it trying to defragment, or building up swap space?) I am using SUSE 10. Any pointers?


  • Objective-C pointer assignment and reassignment dilemma

    - by moshe
    Hi, if I do this:

        1 NSMutableArray *near = [[NSMutableArray alloc] init];
        2 NSMutableArray *all = [[NSMutableArray alloc] init];
        3 NSMutableArray *current = near;
        4 current = all;

    what happens to near? At line 3, am I setting current to point to the same address as near, so that I now have two variables pointing to the same place in memory? Or am I setting current to point to the location of near in memory, such that I now have this structure:

        current -> near -> NSMutableArray

    The obvious difference would be the value of near after line 4. If the former is happening, near is untouched and still points to its initial place in memory. If the latter is happening, then line 4 would change what near refers to as well.


  • A programming language for teaching data structures and algorithms with? [closed]

    - by Andreas Grech
    Possible Duplicate: Choice of programming language for learning data structures and algorithms. Teachers have different opinions on which programming language they would choose to teach data structures and algorithms. Some would prefer a lower-level language such as C, because it lets the student learn more about what goes on beneath the abstractions: memory allocation and deallocation, pointers and pointer arithmetic. On the other hand, others would prefer a higher-level language like Java, because it lets the student focus on the concepts of the structures and on algorithm design, rather than 'waste time' fiddling with segmentation faults and all the blunders that come with languages where memory management is manual. What is your take on this issue? And also, please post any references you may know of that also discuss this argument.


  • Where does the compiler store methods for C++ classes?

    - by Mashmagar
    This is more a curiosity than anything else... Suppose I have a C++ class Kitty as follows:

        class Kitty {
            void Meow() {
                // Do stuff
            }
        };

    Does the compiler place the code for Meow() in every instance of Kitty? Obviously repeating the same code everywhere requires more memory. But on the other hand, branching to a relative location in nearby memory requires fewer assembly instructions than branching to an absolute location in memory on modern processors, so this is potentially faster. I suppose this is an implementation detail, so different compilers may perform differently. Keep in mind, I'm not considering static or virtual methods here.
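    A hedged way to see the answer empirically: non-virtual member function code is emitted once, in the program's code segment, so it contributes nothing to the size of each object. This snippet is an illustration added here, not from the original question.

        #include <iostream>

        class Kitty {
        public:
            void Meow() { /* Do stuff */ }
        };

        int main() {
            // An empty class has the minimum object size (typically 1 byte);
            // adding non-virtual methods does not change it.
            std::cout << sizeof(Kitty) << '\n';
        }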


  • C++ Singleton design pattern.

    - by Artem Barger
    Recently I've bumped into a realization/implementation of the Singleton design pattern for C++. It has looked like this (I have adopted it from a real life example):

        // a lot of methods are omitted here
        class Singleton {
        public:
            static Singleton* getInstance( );
            ~Singleton( );
        private:
            Singleton( );
            static Singleton* instance;
        };

    From this declaration I can deduce that the instance field is initiated on the heap. That means there is a memory allocation. What is completely unclear to me is when exactly the memory is going to be deallocated? Or is there a bug and a memory leak? It seems like there is a problem in the implementation. My main question is, how do I implement it in the right way?
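    Since the question asks for the right way, here is a hedged sketch of the common alternative (often called the "Meyers singleton"): the instance is a function-local static, so there is no heap allocation, nothing to deallocate by hand, and the destructor runs automatically at program exit. The deleted copy operations assume C++11.

        class Singleton {
        public:
            static Singleton& getInstance() {
                static Singleton instance;  // constructed on first use
                return instance;
            }
            Singleton(const Singleton&) = delete;
            Singleton& operator=(const Singleton&) = delete;
        private:
            Singleton() {}
        };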


  • "Access violation reading location" troubles retrieveing buffer from directx

    - by numerical25
    Below is my code:

        ID3D10Texture2D *pBackBuffer;
        hr = mpSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D),
                                    (LPVOID*)&pBackBuffer);

    and I get the following error:

        'chp1.exe': Unloaded 'C:\Windows\SysWOW64\oleaut32.dll'
        First-chance exception at 0x757ce124 in chp1.exe: Microsoft C++ exception: _com_error at memory location 0x0018eeb0..
        First-chance exception at 0x757ce124 in chp1.exe: Microsoft C++ exception: _com_error at memory location 0x0018edd0..
        First-chance exception at 0x757ce124 in chp1.exe: Microsoft C++ exception: _com_error at memory location 0x0018ef1c..
        The thread 'Win32 Thread' (0xfc4) has exited with code 0 (0x0).
        'chp1.exe': Unloaded 'C:\Windows\SysWOW64\D3D10Ref.DLL'
        First-chance exception at 0x00b71894 in chp1.exe: 0xC0000005: Access violation reading location 0x00000000.
        Unhandled exception at 0x00b71894 in chp1.exe: 0xC0000005: Access violation reading location 0x00000000.

    It appears that the error occurs in the last parameter, &pBackBuffer. I added this single line of code and the error occurs.
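    A hedged diagnostic sketch, not a confirmed fix: an access violation reading address 0x00000000 on this call usually means mpSwapChain itself is null, i.e. device/swap-chain creation failed earlier, so checking those HRESULTs tends to expose the real error.

        ID3D10Texture2D *pBackBuffer = NULL;
        if (!mpSwapChain) {
            // Swap-chain creation failed; re-check the HRESULT returned by
            // D3D10CreateDeviceAndSwapChain (or however the chain was created).
            return;
        }
        HRESULT hr = mpSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D),
                                            reinterpret_cast<LPVOID*>(&pBackBuffer));
        if (FAILED(hr)) {
            // Handle the failure instead of using pBackBuffer.
        }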


  • c++ programming for clusters and HPC

    - by Abruzzo Forte e Gentile
    Hi all, I need to write a scientific application in C++ that does a lot of computation and uses a lot of memory. I have part of the job done, but due to the high resource requirements I was thinking of moving to OpenMPI. Before doing that I have a simple question: if I understood correctly, the principle of OpenMPI is that the developer has the task of splitting the job over different nodes, calling SEND and RECEIVE based on the nodes available at the time. Do you know if there exists some library or OS or whatever that has this capability, letting my code remain as it is now? Basically something that connects all the computers and lets them share their memory and CPU as if they were one? I am a bit confused because of the amount of material available on the topic. Should I look at cloud computing? Or Distributed Shared Memory? Can you point me in the right direction? Thanks


  • STL deque accessing by index is O(1)?

    - by jasonline
    I've read that accessing elements by position index can be done in constant time in an STL deque. As far as I know, elements in a deque may be stored in several non-contiguous locations, eliminating safe access through pointer arithmetic. For example: abc-defghi-jkl-mnop. Each element of the deque above consists of a single character. The set of characters in one group indicates it is allocated in contiguous memory (e.g. abc is in a single block of memory, defghi is located in another block of memory, etc.). Can anyone explain how accessing by position index can be done in constant time, especially if the element to be accessed is in the second block? Or does a deque have a pointer to the group of blocks? Update: Or is there any other common implementation for a deque?
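    A hedged sketch of the typical implementation: the deque keeps a small "map" (an array of pointers to equal-sized blocks), so operator[] is just a division and a modulus, two O(1) steps regardless of which block holds the element. The names and block size here are illustrative.

        #include <cstddef>

        template <typename T, std::size_t BlockSize = 4>
        struct ToyDeque {
            T**         map;    // pointers to non-contiguous fixed-size blocks
            std::size_t first;  // offset of element 0 inside the first block

            T& operator[](std::size_t i) {
                std::size_t j = first + i;
                return map[j / BlockSize][j % BlockSize];  // block, then offset
            }
        };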


  • Can I compile and execute C# expression without saving the assembly to disk?

    - by Sasha
    I can compile, get an instance and invoke a method of any C# type programmatically. There's lots of info on that, including on Stack Overflow (http://stackoverflow.com/questions/53844/how-can-i-evaluate-a-c-expression-dynamically). My problem is that I'm in a web environment and cannot save anything to the /bin directory. I can compile "in-memory" as the above mentioned link suggests, but then I won't be able to "unload" my custom assembly from the current AppDomain. After a while that will become a huge memory problem. Is it possible to open a new AppDomain, compile a new assembly "in-memory", evaluate some expression or access some member of that assembly inside of that new AppDomain, and kill that AppDomain safely when done, all without saving anything to the hard drive? Thanks in advance for any links, suggestions, etc.
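    A hedged sketch of the approach on the classic .NET Framework (the Evaluator name and the generated class are illustrative): the helper type is MarshalByRefObject, so it executes inside the child domain, and AppDomain.Unload then discards everything compiled there. Error handling of the compiler results is omitted for brevity.

        using System;
        using System.CodeDom.Compiler;
        using Microsoft.CSharp;

        public class Evaluator : MarshalByRefObject
        {
            public object Run(string expression)
            {
                var provider = new CSharpCodeProvider();
                var opts = new CompilerParameters { GenerateInMemory = true };
                string source = "public static class E { public static object Eval()"
                              + " { return " + expression + "; } }";
                CompilerResults res = provider.CompileAssemblyFromSource(opts, source);
                var type = res.CompiledAssembly.GetType("E");
                return type.GetMethod("Eval").Invoke(null, null);
            }
        }

        public class Program
        {
            public static void Main()
            {
                AppDomain sandbox = AppDomain.CreateDomain("ExpressionSandbox");
                try
                {
                    var ev = (Evaluator)sandbox.CreateInstanceAndUnwrap(
                        typeof(Evaluator).Assembly.FullName, typeof(Evaluator).FullName);
                    Console.WriteLine(ev.Run("1 + 2"));  // compiled inside the sandbox
                }
                finally
                {
                    AppDomain.Unload(sandbox);  // releases every assembly loaded there
                }
            }
        }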


  • Session Timeout and page response time

    - by Johnny5
    Hi, I'm load testing an ASP.NET app. The load test simulates 500 users doing searches on the site and browsing the results. I'm observing that the more I reduce the session timeout limit (in web.config), the better the page response time. For example, with the timeout at 10 minutes, I get an average response time of 8.35 seconds. With the timeout at 3 minutes, the average response time for the same page is 3.98 seconds. The session is stored "InProc". I suppose the memory used by the "no longer used but still active" sessions may be the cause. But even if more memory is used when the timeout is at 10, there is still plenty of memory available (about 2.7 GB). Any ideas?
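    For reference, a minimal sketch of the web.config setting being varied in the test (the 3-minute value is the one from the question):

        <system.web>
          <sessionState mode="InProc" timeout="3" />
        </system.web>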


  • Access to bytes array of a Bitmap

    - by Deulis
    1- In Windows CE, I have a Bitmap object in C#. 2- I have a C function in an external DLL that expects as parameters a pointer to a byte array that represents an image in RGB565 format, plus the width and height. This function will draw on that array of bytes. So I need to pass the byte array pointer of the Bitmap object, but I can't find a practical way to get this pointer. One way is to convert the Bitmap into a byte array using a memory stream or something else, but that creates a new byte array, so I would keep both objects in memory, the Bitmap and the byte array. I don't want that because of the little memory available; that's why I need to access the byte array of the Bitmap object itself, not create a new one. Can anyone help me?
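    A hedged sketch of what usually solves this, assuming a Compact Framework version where LockBits is available: it exposes the bitmap's own pixel buffer as BitmapData.Scan0 (an IntPtr), so no second copy is created. NativeDraw stands in for the P/Invoked C function from the question; the name is an illustrative assumption.

        using System.Drawing;
        using System.Drawing.Imaging;

        Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite,
                                       PixelFormat.Format16bppRgb565);
        try
        {
            // data.Scan0 points at the first scan line of the bitmap's buffer.
            NativeDraw(data.Scan0, bmp.Width, bmp.Height);
        }
        finally
        {
            bmp.UnlockBits(data);  // always unlock, even if NativeDraw throws
        }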


  • Is it possible to load a file full of binary data into GDB when GDB is debugging a core file?

    - by efunneko
    I am debugging a crash using GDB and a core file. A large portion of the memory space is mmapped into the process. That portion of the memory is not saved into the core file. I have a file that contains all the data in that mmapped memory. I would like to find a way to load the data from that file into GDB at a certain offset so that I can display data structures within that address space. Is this possible? Note that I have tried the 'restore' command in GDB and it does not work when debugging a core file. Perhaps there are tools that allow a core file to have additional data appended to it?


  • static effect on python

    - by fatai
    How can we get the effect of a C static variable in Python, without using a class or a global? Not like this one:

        global a
        a = []   # simple ex

        def fonk(a, b, d):
            x = 1
            a.append(x)

    EDIT: I want to create temporary memory: if I exit the function, namely fonk, I want the change saved as a list in that temporary memory. In C we could get that just by putting the static keyword in front of the data type, but in Python we don't have static, so I want that effect in Python. How can I do it? In the code above, "a" represents the temporary memory.
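    A hedged sketch of one common substitute: since Python functions are objects, an attribute on the function itself persists across calls, much like a C static local (a mutable default argument is another frequently used variant).

        def fonk(b, d):
            fonk.a.append(1)   # fonk.a survives between calls
            return fonk.a

        fonk.a = []  # initialise the "static" storage once

        fonk(2, 3)
        fonk(2, 3)
        print(fonk.a)  # [1, 1]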

  • How is the implicit segment register of a near pointer determined?

    - by Daniel Trebbien
    In section 4.3 of the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 1: Basic Architecture, it says: "A near pointer is a 32-bit offset ... within a segment. Near pointers are used for all memory references in a flat memory model or for references in a segmented model where the identity of the segment being accessed is implied." This leads me to wonder: how is the implied segment register determined? I know that (%eip) and displaced (%eip) (e.g. -4(%eip)) addresses use %cs by default, and that (%esp) and displaced (%esp) addresses use %ss, but what about (%eax), (%edx), (%edi), (%ebp), etc.? And can the implicit segment register also depend on the instruction that the memory address operand appears in?
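    A hedged illustration of the SDM's default segment selection rules, in AT&T syntax: data references default to DS unless the base register is %esp or %ebp, in which case SS is implied, and an explicit prefix overrides either. (String instructions additionally imply ES for their %edi-based destination.)

        movl (%eax), %ebx       # DS implied (general-purpose base register)
        movl (%edi), %ebx       # DS implied (outside string instructions)
        movl (%ebp), %ebx       # SS implied (stack-frame base)
        movl 8(%esp), %ebx      # SS implied
        movl %gs:(%eax), %ebx   # explicit segment override prefix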


  • Using sizeof with a dynamically allocated array

    - by robUK
    Hello, gcc 4.4.1, c89. I have the following code snippet:

        #include <stdlib.h>
        #include <stdio.h>

        int main(void)
        {
            char *buffer = malloc(10240);

            /* Check for memory error */
            if (!buffer) {
                fprintf(stderr, "Memory error\n");
                return 1;
            }

            printf("sizeof(buffer) [ %lu ]\n", (unsigned long)sizeof(buffer));
            free(buffer);
            return 0;
        }

    However, sizeof(buffer) always prints 4. I know that a char* is only 4 bytes. However, I have allocated 10 KB of memory. So shouldn't the size be 10240? Am I thinking about this right? Many thanks for any suggestions.
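    A hedged sketch of the usual remedy: sizeof sees only the pointer type, so the allocation size has to travel alongside the pointer, for example in a small struct (the names are illustrative).

        #include <stdlib.h>

        struct buffer {
            char   *data;
            size_t  size;   /* remembered at allocation time */
        };

        struct buffer buf;
        buf.size = 10240;
        buf.data = malloc(buf.size);  /* sizeof(buf.data) is still 4; buf.size is 10240 */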


  • Secure way to run other people's code (sandbox) on my server?

    - by amikazmi
    I want to make a web service that runs other people's code locally. Naturally, I want to limit their code's access to a certain "sandbox" directory, and make sure they won't be able to connect to other parts of my server (DB, main webserver, etc.). What's the best way to do it?

    Run VMware/VirtualBox:
    (+) I guess it's as secure as it gets; even if someone manages to "hack" it, they only hack the guest machine
    (+) can limit the CPU & memory the process uses
    (+) easy to set up: just create the VM
    (-) harder to "connect" the sandbox directory from the host to the guest
    (-) wastes extra memory and CPU on managing the VM

    Run as an underprivileged user:
    (+) doesn't waste extra resources
    (+) the sandbox directory is just a plain directory
    (?) can't limit CPU and memory? (see the sketch below)
    (?) don't know if it's secure enough...

    Any other way? The server is running Fedora Core 8; the "other" code is written in Java & C++.
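    On the "can't limit CPU and memory?" point: a hedged sketch using setrlimit(2), which even an unprivileged launcher can call before exec'ing the untrusted program. The paths and limit values are illustrative assumptions.

        #include <sys/resource.h>
        #include <unistd.h>

        int main(void)
        {
            struct rlimit cpu = { 10, 10 };                  /* 10 s of CPU time */
            struct rlimit mem = { 256u << 20, 256u << 20 };  /* 256 MB address space */

            setrlimit(RLIMIT_CPU, &cpu);
            setrlimit(RLIMIT_AS,  &mem);

            /* Drop into the sandbox directory and run the submitted code. */
            chdir("/sandbox");
            execl("/sandbox/run.sh", "run.sh", (char *)0);
            return 1;  /* only reached if exec failed */
        }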


  • Can I play any Buffer only once at a given time?

    - by mystify
    From the OpenAL documentation: "The basic OpenAL objects are a Listener, a Source, and a Buffer. There can be a large number of Buffers, which contain audio data. Each buffer can be attached to one or more Sources." My problem is that I have one sound file which I need to play multiple times per second, at the same time. The sound is 2 seconds long, so it will overlap. Would I need multiple filled buffers for this (= multiple copies of that sound in memory)? If I attached one Buffer to multiple Sources, would I be able to play the sound 10 times, overlapping itself, with just one copy in memory? Or would I still have to deal with 10 copies of that sound in memory?
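    A hedged sketch of the one-buffer/many-sources arrangement the quoted documentation allows: the PCM data lives once in the buffer, and each source is an independent playback voice over that same data, so ten overlapping plays need ten sources but only one in-memory copy. The function and counts below are illustrative.

        #include <AL/al.h>

        /* 'buffer' was filled once elsewhere via alBufferData(...). */
        void play_overlapping(ALuint buffer)
        {
            ALuint sources[10];
            alGenSources(10, sources);
            for (int i = 0; i < 10; ++i)
                alSourcei(sources[i], AL_BUFFER, (ALint)buffer); /* same buffer */
            alSourcePlay(sources[0]);  /* start voices as needed; they overlap */
        }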


  • Why does hashCode() return the same value for an object in all consecutive executions?

    - by Vijay Shanker
    Hi, I am trying some code around object equality in Java. As I have read somewhere, hashCode() is a number which is generated by applying a hash function. The hash function can be different for each object but can also be the same. At the Object level, it returns the memory address of the object. Now, I have a sample program, which I run 10 times consecutively. Every time I run the program I get the same value as the hash code. If the hashCode() function returns the memory location of the object, how come the JVM stores the object at the same memory address in consecutive runs? Can you please give me some insight and your view on this issue?
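    A hedged illustration (HashDemo is just a demo name): the identity hash is stable within a run, but Object.hashCode's contract only promises consistency during a single execution; any cross-run stability you observe is a JVM implementation detail, not a guarantee that the object sits at the same real memory address.

        public class HashDemo {
            public static void main(String[] args) {
                Object o = new Object();
                System.out.println(o.hashCode());               // stable within this run
                System.out.println(System.identityHashCode(o)); // the identity hash explicitly
            }
        }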


  • Recover Deleted Files on an NTFS Hard Drive from a Ubuntu Live CD

    - by Trevor Bekolay
    Accidentally deleting a file is a terrible feeling. Not being able to boot into Windows and undelete that file makes that even worse. Fortunately, you can recover deleted files on NTFS hard drives from an Ubuntu Live CD.

    To show this process, we created four files on the desktop of a Windows XP machine, and then deleted them. We then booted up the same machine with the bootable Ubuntu 9.10 USB Flash Drive that we created last week.

    Once Ubuntu 9.10 boots up, open a terminal by clicking Applications in the top left of the screen, and then selecting Accessories > Terminal.

    To undelete our files, we first need to identify the hard drive that we want to undelete from. In the terminal window, type in:

        sudo fdisk -l

    and press enter. What you're looking for is a line that ends with HPFS/NTFS (under the heading System). In our case, the device is "/dev/sda1". This may be slightly different for you, but it will still begin with /dev/. Note this device name.

    If you have more than one hard drive partition formatted as NTFS, then you may be able to identify the correct partition by the size. If the fdisk output reads "Disk /dev/sda: 136.4 GB, ...", that means the hard drive that Ubuntu has named /dev/sda is 136.4 GB large. If your hard drives are of different sizes, this information can help you track down the right device name to use. Alternatively, you can just try them all, though this can be time consuming for large hard drives.

    Now that you know the name Ubuntu has assigned to your hard drive, we'll scan it to see what files we can uncover. In the terminal window, type:

        sudo ntfsundelete <HD name>

    and hit enter. In our case, the command is:

        sudo ntfsundelete /dev/sda1

    The names of files that can be recovered show up in the far right column. The percentage in the third column tells us how much of each file can be recovered. Three of the four files that we originally deleted are showing up in this list, even though we shut down the computer right after deleting the four files; so even in ideal cases, your files may not be recoverable. Nevertheless, we have three files that we can recover: two JPGs and an MPG.

    Note: ntfsundelete is immediately available in the Ubuntu 9.10 Live CD. If you are in a different version of Ubuntu, or for some other reason get an error when trying to use ntfsundelete, you can install it by entering "sudo apt-get install ntfsprogs" in a terminal window.

    To quickly recover the two JPGs, we will use the * wildcard to recover all of the files that end with .jpg. In the terminal window, enter:

        sudo ntfsundelete <HD name> -u -m *.jpg

    which is, in our case:

        sudo ntfsundelete /dev/sda1 -u -m *.jpg

    The two files are recovered from the NTFS hard drive and saved in the current working directory of the terminal. By default, this is the home directory of the current user, though we are working in the Desktop folder.

    Note that the ntfsundelete program does not make any changes to the original NTFS hard drive. If you want to take those files and put them back on the NTFS hard drive, you will have to move them there after they are undeleted with ntfsundelete. Of course, you can also put them on your flash drive or open Firefox and email them to yourself; the sky's the limit!

    We have one more file to undelete: our MPG. Note the first column on the far left. It contains a number, the file's Inode. Think of this as the file's unique identifier. Note this number.

    To undelete a file by its Inode, enter the following in the terminal:

        sudo ntfsundelete <HD name> -u -i <Inode>

    In our case, this is:

        sudo ntfsundelete /dev/sda1 -u -i 14159

    This recovers the file, along with an identifier that we don't really care about. All three of our recoverable files are now recovered. However, Ubuntu lets us know visually that we can't use these files yet. That's because the ntfsundelete program saves the files as the "root" user, not the "ubuntu" user. We can verify this by typing the following in our terminal window:

        ls -l

    We want these three files to be owned by ubuntu, not root. To do this, enter the following in the terminal window:

        sudo chown ubuntu <Files>

    If the current folder has other files in it, you may not want to change their owner to ubuntu. However, in our case, we only have these three files in this folder, so we will use the * wildcard to change the owner of all three files:

        sudo chown ubuntu *

    The files now look normal, and we can do whatever we want with them. Hopefully you won't need to use this tip, but if you do, ntfsundelete is a nice command-line utility. It doesn't have a fancy GUI like many of the similar Windows programs, but it is a powerful tool that can recover your files quickly. See ntfsundelete's manual page for more detailed usage information.


  • Django app that can provide user friendly, multiple / mass file upload functionality to other apps

    - by hopla
    Hi, I'm going to be honest: this is a question I asked on the Django-Users mailing list last week. Since I didn't get any replies there yet, I'm reposting it on Stack Overflow in the hope that it gets more attention here.

    I want to create an app that makes it easy to do user friendly, multiple / mass file upload in your own apps. By user friendly I mean upload like Gmail, Flickr, ... where the user can select multiple files at once in the browse file dialog. The files are then uploaded sequentially or in parallel, and a nice overview of the selected files is shown on the page with a progress bar next to them. A 'Cancel' upload button is also a possible option.

    All that niceness is usually solved by using a Flash object. Complete solutions are out there for the client side, like: SWFUpload http://swfupload.org/ , FancyUpload http://digitarald.de/project/fancyupload/ , YUI 2 Uploader http://developer.yahoo.com/yui/uploader/ and probably many more. Of course the trick is getting those solutions integrated in your project, especially in a framework like Django, double so if you want it to be reusable.

    So, I have a few ideas, but I'm neither an expert on Django nor on Flash based upload solutions. I'll share my ideas here in the hope of getting some feedback from more knowledgeable and experienced people. (Or even just some 'I want this too!' replies :) ) You will notice that I make a few assumptions: this is to keep the (initial) scope of the application under control. These assumptions are of course debatable. All right, my ideas so far:

    If you want to mass upload multiple files, you are going to have a model to contain each file in, i.e. the model will contain one FileField or one ImageField. Models with multiple (but of course finite) FileFields/ImageFields are not in need of easy mass uploading imho: if you have a model with 100 FileFields you are doing something wrong :)

    Examples where you would want my envisioned kind of mass upload: an app that has just one model 'Brochure' with a file field, a title field (dynamically created from the filename) and a date_added field; or a photo gallery app with models 'Gallery' and 'Photo', where you pick a Gallery to add pictures to, upload the pictures, and new Photo objects are created with foreign keys set to the chosen Gallery.

    It would be nice to be able to configure or extend the app for your favorite Flash upload solution. We can pick one of the three above as a default, but implement the app so that people can easily add additional implementations (kinda like Django can use multiple databases). Let it be agnostic to any particular client side solution. If we need to pick one to start with, maybe pick the one with the smallest footprint (smallest download of client side stuff)?

    The Flash based solutions asynchronously (and either sequentially or in parallel) POST the files to a url. I suggest that url be local to our generic app (so it's the same for every app where you use our app). That url will go to a view provided by our generic app. The view will do the following: create a new model instance, add the file, OPTIONALLY DO EXTRA STUFF and save the instance. (A rough sketch of such a view is below.) DO EXTRA STUFF is code that the app that uses our app wants to run. It doesn't have to provide any extra code; if the model has just a FileField/ImageField, the standard view code will do the job. But most apps will want to do extra stuff I think, like filling in the other fields: title, date_added, foreign keys, many-to-many, ...

    I have not yet thought about a mechanism for DO EXTRA STUFF. Just wrapping the generic app view came to mind, but that is not developer friendly, since you would have to write your own url pattern and your own view. Then you have to tell the Flash solutions to use a new url, etc. I think something like signals could be used here?

    Forms/Admin: I'm still very sketchy on how all this could best be integrated in the Admin or generic Django forms/widgets/... (and this is where my lack of Django experience shows). In the case of the Gallery/Photo app: you could provide a mass Photo upload widget on the Gallery detail form, but what if the Gallery instance is not saved yet? The file upload view won't be able to set the foreign keys on the Photo instances. I see that the auth app, when you create a user, first asks for username and password and only then provides you with a bigger form to fill in email address, pick roles, etc. We could do something like that. In the case of an app with just one model: how do you provide a form in the Django admin to do your mass upload? You can't do it with the detail form of your model; that's just for one model instance.

    There are probably dozens more questions that need to be answered before I can even start on this app. So please tell me what you think! Give me input! What do you like? What not? What would you do different? Is this idea solid? Where is it not? Thank you!


  • plupload variable upload path?

    - by SoulieBaby
    Hi all, I'm using plupload to upload files to my server (http://www.plupload.com/index.php), however I wanted to know if there was any way of making the upload path variable. Basically I need to select the upload path folder first, then choose the files using plupload and then upload to the initially selected folder. I've tried a few different ways but I can't seem to pass the variable folder path along to the upload.php file. I'm using the flash version of plupload. If someone could help me out, that would be fantastic!! :)

    Here's my plupload jQuery:

        jQuery.noConflict();
        jQuery(document).ready(function() {
            jQuery("#flash_uploader").pluploadQueue({
                // General settings
                runtimes: 'flash',
                url: '/assets/upload/upload.php',
                max_file_size: '10mb',
                chunk_size: '1mb',
                unique_names: false,

                // Resize images on clientside if we can
                resize: {width: 500, height: 350, quality: 100},

                // Flash settings
                flash_swf_url: '/assets/upload/flash/plupload.flash.swf'
            });
        });

    And here's the upload.php file:

        <?php
        /**
         * upload.php
         *
         * Copyright 2009, Moxiecode Systems AB
         * Released under GPL License.
         *
         * License: http://www.plupload.com/license
         * Contributing: http://www.plupload.com/contributing
         */

        // HTTP headers for no cache etc
        header('Content-type: text/plain; charset=UTF-8');
        header("Expires: Mon, 26 Jul 1997 05:00:00 GMT");
        header("Last-Modified: ".gmdate("D, d M Y H:i:s")." GMT");
        header("Cache-Control: no-store, no-cache, must-revalidate");
        header("Cache-Control: post-check=0, pre-check=0", false);
        header("Pragma: no-cache");

        // Settings
        $targetDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads";  // temp directory  <- need these to be variable
        $finalDir  = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads2"; // final directory <- need these to be variable
        $cleanupTargetDir = true; // Remove old files
        $maxFileAge = 60 * 60;    // Temp file age in seconds

        // 5 minutes execution time
        @set_time_limit(5 * 60);
        // usleep(5000);

        // Get parameters
        $chunk    = isset($_REQUEST["chunk"])  ? $_REQUEST["chunk"]  : 0;
        $chunks   = isset($_REQUEST["chunks"]) ? $_REQUEST["chunks"] : 0;
        $fileName = isset($_REQUEST["name"])   ? $_REQUEST["name"]   : '';

        // Clean the fileName for security reasons
        $fileName = preg_replace('/[^\w\._]+/', '', $fileName);

        // Create target dir
        if (!file_exists($targetDir))
            @mkdir($targetDir);

        // Remove old temp files
        if (is_dir($targetDir) && ($dir = opendir($targetDir))) {
            while (($file = readdir($dir)) !== false) {
                $filePath = $targetDir . DIRECTORY_SEPARATOR . $file;

                // Remove temp files if they are older than the max age
                if (preg_match('/\\.tmp$/', $file) && (filemtime($filePath) < time() - $maxFileAge))
                    @unlink($filePath);
            }
            closedir($dir);
        } else
            die('{"jsonrpc" : "2.0", "error" : {"code": 100, "message": "Failed to open temp directory."}, "id" : "id"}');

        // Look for the content type header
        if (isset($_SERVER["HTTP_CONTENT_TYPE"]))
            $contentType = $_SERVER["HTTP_CONTENT_TYPE"];

        if (isset($_SERVER["CONTENT_TYPE"]))
            $contentType = $_SERVER["CONTENT_TYPE"];

        if (strpos($contentType, "multipart") !== false) {
            if (isset($_FILES['file']['tmp_name']) && is_uploaded_file($_FILES['file']['tmp_name'])) {
                // Open temp file
                $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab");
                if ($out) {
                    // Read binary input stream and append it to temp file
                    $in = fopen($_FILES['file']['tmp_name'], "rb");
                    if ($in) {
                        while ($buff = fread($in, 4096))
                            fwrite($out, $buff);
                    } else
                        die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}');

                    fclose($out);
                    unlink($_FILES['file']['tmp_name']);
                } else
                    die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}');
            } else
                die('{"jsonrpc" : "2.0", "error" : {"code": 103, "message": "Failed to move uploaded file."}, "id" : "id"}');
        } else {
            // Open temp file
            $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab");
            if ($out) {
                // Read binary input stream and append it to temp file
                $in = fopen("php://input", "rb");
                if ($in) {
                    while ($buff = fread($in, 4096))
                        fwrite($out, $buff);
                } else
                    die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}');

                fclose($out);
            } else
                die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}');
        }

        // Moves the file from $targetDir to $finalDir after receiving the final chunk
        if ($chunk == ($chunks - 1))
            rename($targetDir . DIRECTORY_SEPARATOR . $fileName, $finalDir . DIRECTORY_SEPARATOR . $fileName);

        // Return JSON-RPC response
        die('{"jsonrpc" : "2.0", "result" : null, "id" : "id"}');
        ?>

