Search Results

Search found 70390 results on 2816 pages for 'file upload'.


  • Joomla, forms with upload and custom fields from inside the administration panel

    - by Stathis
    I want a plugin for Joomla, like jForms or ChronoForms, that lets me build a form to upload videos along with other custom fields to the database and manage them. The only catch is that I want this functionality inside the administrator console, not on a page on my site's frontend. My site does not have a login service, so the admin needs to be able to log in to the administration panel and upload and manage videos from there. Do you know of a plugin which supports this functionality? Thank you in advance.

    Read the article

  • upload multiple images, display thumbnails, manipulate image

    - by robert
    Hi, I need a little help here. I want to be able to upload multiple images and, after they are all uploaded, display thumbnails. When I click on a thumbnail I want to be able to rotate the image (rotating the original image, not only the thumb). I want all uploaded images to end up in one PHP array, in the order they were uploaded, and ideally to be able to change that order, which I know is possible with jQuery. How can I code this? I have started but can't get it done; here is my project so far: http://www.mediafire.com/?3uzzgx5onzn Any help is welcome! Thanks!
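    A minimal sketch of the rotation step, assuming PHP's GD extension, JPEG uploads and a hypothetical uploads/ directory; it rotates the original file on disk and then rebuilds the thumbnail, so the thumb and the original stay in sync:

        <?php
        // Rotate an uploaded JPEG on disk and rebuild its thumbnail (GD extension).
        // The paths below are placeholders for illustration.
        function rotateImage(string $path, float $degrees): void
        {
            $source  = imagecreatefromjpeg($path);          // load the original
            $rotated = imagerotate($source, $degrees, 0);   // counter-clockwise rotation
            imagejpeg($rotated, $path, 90);                 // overwrite the original file
            imagedestroy($source);
            imagedestroy($rotated);
        }

        function rebuildThumbnail(string $path, string $thumbPath, int $maxWidth = 150): void
        {
            [$w, $h] = getimagesize($path);
            $scale   = $maxWidth / $w;
            $thumb   = imagecreatetruecolor($maxWidth, (int)($h * $scale));
            $source  = imagecreatefromjpeg($path);
            imagecopyresampled($thumb, $source, 0, 0, 0, 0, $maxWidth, (int)($h * $scale), $w, $h);
            imagejpeg($thumb, $thumbPath, 85);
            imagedestroy($thumb);
            imagedestroy($source);
        }

        rotateImage('uploads/photo1.jpg', 90);
        rebuildThumbnail('uploads/photo1.jpg', 'uploads/thumbs/photo1.jpg');

    Keeping the upload order is then just a matter of appending each stored filename to a PHP array (or session entry) as it is saved.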

    Read the article

  • Upload Image to Facebook Objective-C

    - by boopyman
    I'm currently trying to upload an image from my Mac application to Facebook. To do this, I'd like the user to simply enter a username and password and click a button. The only issue is that Facebook doesn't actually have an API for the Mac, only one for iOS. This shouldn't be a problem, except that to log in you must use a web view, something I'm not too keen on doing, since I'd like the interface to be two simple text fields. I've also looked into PHFacebook, a class I found online, but it also seems to use an NSWebView. I'm wondering if there's a security issue when you use text fields; indeed, it's slightly strange that no available API offers this function! So, to conclude, is it possible, or is there an API, that lets you upload an image and provide the user's credentials through simple NSStrings?

    Read the article

  • Android: writing a file to sdcard

    - by Sumit M Asok
    I'm trying to write a file from an HTTP POST reply to a file on the sdcard. Everything works fine until the byte array of data is retrieved. I've tried setting the WRITE_EXTERNAL_STORAGE permission in the manifest and tried many different combinations from tutorials I found on the net. All I could find was the activity's openFileOutput("", MODE_WORLD_READABLE) method, but my app writes the file from a thread. Specifically, a thread is invoked from another thread when a file has to be written, so passing an activity object didn't work even though I tried it. The app has come a long way and I cannot change how it is currently written. Please, can someone help me? Code:

        File file = new File(bgdmanip.savLocation);
        FileOutputStream filecon = null;
        filecon = new FileOutputStream(file); // bgdmanip.savLocation holds the whole file's path
        byte[] myByte;
        myByte = Base64Coder.decode(seReply);
        Log.d("myBytes", String.valueOf(myByte));
        bos.write(myByte);
        filecon.write(myByte);
        myvals = x * 11024;

    seReply is a string reply from the HttpPost response. The second block of code is looped with reference to x. The file is created but remains 0 bytes.

    Read the article

  • PHP file_put_contents File Locking

    - by hozza
    The scenario: you have a file with a string (an average sentence's worth) on each line. For argument's sake, let's say this file is 1MB in size (thousands of lines). You have a script that reads the file, changes some of the strings within the document (not just appending but also removing and modifying some lines) and then overwrites all the data with the new data. The questions: Does 'the server' (PHP, the OS, httpd, etc.) already have systems in place to stop issues like this (reading/writing halfway through a write)? i. If it does, please explain how it works and give examples or links to relevant documentation. ii. If not, are there things I can enable or set up, such as locking a file until a write is completed and making all other reads and/or writes fail until the previous script has finished writing? My assumptions and other information: The server in question is running PHP and Apache or Lighttpd. If the script is called by one user and is halfway through writing to the file, and another user reads the file at that exact moment, the user who reads it will not get the full document, as it hasn't been fully written yet. (If this assumption is wrong please correct me.) I'm only concerned with PHP writing to and reading from a text file, and in particular the functions "fopen"/"fwrite" and mainly "file_put_contents". I have looked at the "file_put_contents" documentation but have not found the level of detail or a good explanation of what the "LOCK_EX" flag is or does. The scenario is an example of a worst case where I would assume these issues are more likely to occur, due to the large size of the file and the way the data is edited. I want to learn more about these issues and don't want or need answers or comments such as "use mysql" or "why are you doing that", because I'm not doing that; I just want to learn about file reading/writing with PHP and don't seem to be looking in the right places/documentation. And yes, I understand PHP is not the perfect language for working with files in this way...
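    For what it's worth, a rough sketch of what the LOCK_EX flag asks for: an exclusive advisory lock held for the duration of the write, with flock() as the underlying primitive. The read-modify-write cycle described above would look roughly like this (data.txt is a placeholder, and the lock only helps if readers cooperate by taking LOCK_SH):

        <?php
        // Read-modify-write guarded by an exclusive advisory lock.
        // "data.txt" is a placeholder path; cooperating readers should take LOCK_SH.
        $path = 'data.txt';

        $fp = fopen($path, 'c+');              // read/write, create if missing, no truncation
        if ($fp === false) {
            exit("cannot open $path\n");
        }

        if (flock($fp, LOCK_EX)) {             // blocks until the exclusive lock is granted
            $contents = stream_get_contents($fp);
            $modified = strtoupper($contents); // stand-in for the real line edits
            ftruncate($fp, 0);                 // safe to rewrite now that we hold the lock
            rewind($fp);
            fwrite($fp, $modified);
            fflush($fp);
            flock($fp, LOCK_UN);               // release before closing
        }
        fclose($fp);

        // The single-call equivalent for a plain overwrite:
        // file_put_contents($path, $modified, LOCK_EX);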

    Read the article

  • How to add an SSH user to my Ubuntu 12 server to upload PHP files

    - by user229209
    I have an Ubuntu 12 VPS and wanted to create a user account to upload and download my PHP code. So, logged in as root, I created a user "chris" and then created a directory /var/www/chris. I want "chris" to be able to upload and run files in the /var/www/chris directory. Permissions for the chris dir look like this:

        drwxrwxr-x 2 root chris 4096 Aug 20 03:35 chris

    As root I created a sample file called abc.php and put it in the chris dir. It worked fine when I tested it in a browser. I logged in as chris and uploaded a file called 1234.php. That did not work; I just got a blank PHP page. The code was identical in both files, so it is not the code. The permissions now look like this:

        -rw-r--r-- 1 root chris 59 Aug 20 03:34 1234.php
        -rw-r--r-- 1 root root  49 Aug 20 03:21 abc.php

    How do I allow the "chris" user to upload files and get them to work?

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately. We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size). For some of the moderately-sized source files, small blocks (256KB) are best. As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant. The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file. (click chart for full size image) The above is another view of the same data as the prior chart, just with the axes changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files. Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. Related Resources: Source code for upload test application; Source code for random file generator; OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data); OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks); Excel worksheet showing summarizations and comparisons.
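    The post's implementation is C#/.NET against Azure's PutBlock()/PutBlockList() API. Purely to make the general idea concrete (split the source file into fixed-size blocks, send them concurrently, then commit), here is a PHP sketch using curl_multi against a hypothetical https://example.com/upload endpoint; it is not the author's code and not Azure's API:

        <?php
        // Illustration only: split a file into 1MB blocks and POST them in parallel.
        // The endpoint URL and the "block" query parameter are hypothetical.
        $path      = 'bigfile.bin';
        $blockSize = 1024 * 1024;               // 1MB, the post's recommended default
        $endpoint  = 'https://example.com/upload';

        $multi   = curl_multi_init();
        $handles = [];

        $fp = fopen($path, 'rb');
        for ($index = 0; !feof($fp); $index++) {
            $block = fread($fp, $blockSize);
            if ($block === '' || $block === false) {
                break;
            }
            $ch = curl_init($endpoint . '?block=' . $index);
            curl_setopt_array($ch, [
                CURLOPT_POST           => true,
                CURLOPT_POSTFIELDS     => $block,
                CURLOPT_RETURNTRANSFER => true,
                // Send an MD5 so the receiver can verify the bits that arrived.
                CURLOPT_HTTPHEADER     => ['Content-MD5: ' . base64_encode(md5($block, true))],
            ]);
            curl_multi_add_handle($multi, $ch);
            $handles[$index] = $ch;
        }
        fclose($fp);

        // Drive all transfers until every block has completed.
        do {
            $status = curl_multi_exec($multi, $running);
            if ($running) {
                curl_multi_select($multi);
            }
        } while ($running && $status === CURLM_OK);

        foreach ($handles as $index => $ch) {
            echo "block $index: HTTP " . curl_getinfo($ch, CURLINFO_RESPONSE_CODE) . "\n";
            curl_multi_remove_handle($multi, $ch);
            curl_close($ch);
        }
        curl_multi_close($multi);
        // A final "commit" request (the PutBlockList analogue) would go here.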

    Read the article

  • Multiple file descriptors to the same file, C

    - by Gigi
    I have a multithreaded application that is opening and reading the same file (not writing). I am opening a different file descriptor for each thread (but they all point to the same file). Each thread then reads the file and may close it and open it again if EOF is reached. Is this OK? If I perform fclose() on a file descriptor, does it affect the other file descriptors that point to the same file?

    Read the article

  • Uploadify refuses to upload WMV, FLV and MP4 files - SOLVED

    - by Jon Winstanley
    The Uploadify plugin for jQuery seems very good and works for most file types. However, it lets me upload all file types apart from the ones I need, namely .WMV, .FLV and .MP4. Uploads of any other type work. I have already tried changing the fileExt parameter and also tried removing it altogether. I have tested in Google Chrome, IE7 and Firefox, and none work for these file types. I have a ton of local projects already and uploading is not an issue on any other project; I even use the same example files (this is the first time I have used Uploadify). Is there a known reason for this behaviour? EDIT: Have found the issue. I had forgotten to add my usual .htaccess file to the example project.

    Read the article

  • FTP zip upload and unpack

    - by DR.GEWA
    Hi. When uploading websites and projects I have made, I always want to do the same thing: make a zip file, upload that one file, and then extract it on the server with default permissions, let's say 755 for folders and 664 for files. With cPanel hostings it's OK, I can do it via the file manager, but for hostings without cPanel I can't. Maybe someone can give a hint how?
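    The extract-and-chmod step can be scripted on the server once the archive is uploaded, for instance with PHP's ZipArchive extension if PHP is available. A rough sketch using the permission values from the question (755 for folders, 664 for files); the archive name and target directory are placeholders:

        <?php
        // Extract an uploaded archive and normalise permissions (ZipArchive extension).
        $archive = 'site.zip';
        $target  = __DIR__ . '/public_html';

        $zip = new ZipArchive();
        if ($zip->open($archive) !== true) {
            exit("could not open $archive\n");
        }
        $zip->extractTo($target);
        $zip->close();

        // Walk the extracted tree: 0755 for directories, 0664 for files.
        $iterator = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator($target, FilesystemIterator::SKIP_DOTS),
            RecursiveIteratorIterator::SELF_FIRST
        );
        foreach ($iterator as $item) {
            chmod($item->getPathname(), $item->isDir() ? 0755 : 0664);
        }
        echo "extracted and permissions set\n";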

    Read the article

  • Detecting file upload size on the client side?

    - by DisgruntledGoat
    I'm using PHP for file uploads. In the PHP manual it shows an example using a MAX_FILE_SIZE hidden field, saying that it will detect on the client side (i.e. the browser) whether the file is too large or not. I've just tried the example in Firefox, Chrome and IE and it doesn't work. The file is always uploaded, even if it is way larger than the specified hidden field. Incidentally, if the file is larger than MAX_FILE_SIZE then calling move_uploaded_file doesn't work, so it seems the variable is having an effect server-side, but not client-side.
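    In practice browsers do not honour the hidden field, which matches what is observed here; PHP only enforces MAX_FILE_SIZE once the upload reaches the server, reporting UPLOAD_ERR_FORM_SIZE for the offending file. So the size check has to happen server-side, roughly like this sketch (the userfile field name and 2MB limit are placeholders):

        <?php
        // Server-side check of an uploaded file's size and upload error code.
        $limit = 2 * 1024 * 1024;   // 2MB

        if (!isset($_FILES['userfile'])) {
            exit("no file field in the request\n");
        }
        $file = $_FILES['userfile'];

        switch ($file['error']) {
            case UPLOAD_ERR_OK:
                break;
            case UPLOAD_ERR_INI_SIZE:    // exceeded upload_max_filesize in php.ini
            case UPLOAD_ERR_FORM_SIZE:   // exceeded the MAX_FILE_SIZE hidden field
                exit("file is too large\n");
            default:
                exit("upload failed with error code {$file['error']}\n");
        }

        if ($file['size'] > $limit) {    // belt-and-braces check of the actual size
            exit("file exceeds the {$limit} byte limit\n");
        }

        move_uploaded_file($file['tmp_name'], __DIR__ . '/uploads/' . basename($file['name']));
        echo "stored " . basename($file['name']) . "\n";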

    Read the article

  • Splitting a file before upload?

    - by Yevgeniy Brikman
    On a webpage, is it possible to split large files into chunks before the file is uploaded to the server? For example, split a 10MB file into 1MB chunks, and upload one chunk at a time while showing a progress bar? It sounds like JavaScript doesn't have any file manipulation abilities, but what about Flash and Java applets? This would need to work in IE6+, Firefox and Chrome. Update: forgot to mention that (a) we are using Grails and (b) this needs to run over https.
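    The question is about the client side (and the poster's stack is Grails), but for context the receiving end of such a scheme is small. A PHP sketch that appends each chunk to the target file, assuming the chunks are posted one at a time in order as described, with hypothetical chunk/chunks/name request parameters:

        <?php
        // Receiving end of a chunked upload: append each chunk to the target file.
        // Assumes chunks are sent sequentially; parameter and field names are hypothetical.
        $index = (int) $_POST['chunk'];      // 0-based index of this chunk
        $total = (int) $_POST['chunks'];     // total number of chunks
        $name  = basename($_POST['name']);   // original file name, sanitised

        $partial = sys_get_temp_dir() . "/$name.part";

        // Append this chunk's bytes to the partial file.
        file_put_contents($partial, file_get_contents($_FILES['file']['tmp_name']),
                          FILE_APPEND | LOCK_EX);

        if ($index === $total - 1) {
            rename($partial, __DIR__ . "/uploads/$name");   // last chunk: finalise
            echo "complete";
        } else {
            echo "received chunk $index of $total";
        }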

    Read the article

  • showing error on uploading a big file using php

    - by user1489969
    I have created PHP code to display the upload option for multiple files, as below:

        <?php $f_id = $_GET["id"]; ?>
        <title>Upload File</title>
        <form enctype="multipart/form-data" method="post" action="upload_hal_mult.php?id=<?php echo $f_id;?>" >
        <input type="hidden" name="MAX_FILE_SIZE" value="10000000">
        <input id="infile" type="file" name="infile[]" multiple="true" />
        <input type="submit" value="upload" name="file_uploaded" />
        <br> <br>
        </form>

    So this will call "upload_hal_mult.php" when the "upload" button is clicked, and the code for that is as follows:

        <title>Upload Results</title>
        <?php
        define("MAX_SIZE", 10000000);
        $f_id = $_GET["id"];
        $dir_name = "dir_hal_" . $f_id;
        $u = 0;
        if (!is_dir($dir_name)) mkdir($dir_name);
        $dir = $dir_name . "/";
        $file_realname = $_FILES['infile']['name'];
        for ($i = 0; $i < count($_FILES['infile']['name']); $i++) {
            $ext = substr(strrchr($_FILES['infile']['name'][$i], "."), 1);
            $fname = substr($_FILES['infile']['name'][$i], 0, strpos($_FILES['infile']['name'][$i], "."));
            $fPath = $fname . "_(" . substr(md5(rand() * time()), 0, 4) . ")" . ".$ext";
            echo "files size=" . $_FILES["infile"]["size"][$i] . "\n";
            if ($_FILES["infile"]["size"][$i] > MAX_SIZE)
                echo('File uploaded exceeds maximum upload size.');
            if (($_FILES['infile']['error'][i] == 0) && move_uploaded_file($_FILES['infile']['tmp_name'][$i], $dir . $fPath)) {
                $u = $u + 1;
                ?>
                <!--<script type="text/javascript">setTimeout("window.close();", 1300);</script>-->
                <?php
                echo "Upload is successful\n";
            } else
                echo "if stmt failed so error \n";
        }
        if ($u != count($_FILES['infile']['name']))
            echo "Error";
        else
            echo "count is correct";
        ?>

    This upload works correctly for files smaller than 10MB. But for files larger than 10MB it is not echoing 'File uploaded exceeds maximum upload size.' as expected. It is also not uploading files larger than 10MB, yet $u gets incremented. None of the statements like "Upload is successful" or "if stmt failed so error" are being echoed either. However, the statement "count is correct" is displayed, which shows that $u got incremented somehow even though the echo statements didn't run! Can someone please point out the error I am making here? I thought it was simply a matter of 'if/else' statements, but it seems to be more than that. Please help me out if you have any clue. Thanks

    Read the article

  • Upload files from website

    - by Andrew
    All I want to do is allow the user to browse through their folders to look for a file, select it and then press submit which saves the file to my server as well as the path to the saved file. How would someone do this? (Some sort of tutorial website would help a lot)
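    A minimal sketch of that flow in PHP, as one common option: a single script that shows the browse/submit form, saves the upload, and records the saved path (the field name, uploads/ directory and files.txt log are placeholders; a database column would work just as well for the path):

        <?php
        // Single-file sketch: display the form, then save the upload and record its path.
        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            if ($_FILES['userfile']['error'] === UPLOAD_ERR_OK) {
                $target = __DIR__ . '/uploads/' . basename($_FILES['userfile']['name']);
                if (move_uploaded_file($_FILES['userfile']['tmp_name'], $target)) {
                    // Remember where the file went.
                    file_put_contents(__DIR__ . '/files.txt', $target . PHP_EOL, FILE_APPEND);
                    echo 'Saved to ' . htmlspecialchars($target);
                }
            } else {
                echo 'Upload failed (error ' . $_FILES['userfile']['error'] . ')';
            }
        }
        ?>
        <form method="post" enctype="multipart/form-data">
            <input type="file" name="userfile">
            <input type="submit" value="Upload">
        </form>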

    Read the article

  • File system implementation in MongoDB with GridFS

    - by Ralph
    I am working on two projects that will both implement a Webdav server backed by a MongoDB GridFS. In each case, there is the potential for the system to store tens of millions of files spread across thousands of hierarchical directories. I can come up with two different ways of storing the directory structure: As a "true" hierarchical file system, with directories containing the IDs (_id) of subdirectories and regular files. The paths will be separated by slashes (/) as in a POSIX-compliant file system. The path /a/b/c will be represented as a directory a containing a directory b containing a file c. As a flat file system, where file names include the slashes. The path /a/b/c will be stored as a single file with the name /a/b/c What are the advantages and disadvantages of each, with respect to a "real" folder-based file system?
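    Purely to make the two options concrete, a sketch (in PHP array form) of the documents each layout would store for the path /a/b/c; the field names are illustrative, not GridFS's own schema:

        <?php
        // Option 1: a "true" hierarchy -- each directory document lists its children by _id.
        $hierarchical = [
            ['_id' => 'id_a', 'name' => 'a', 'type' => 'dir',  'children' => ['id_b']],
            ['_id' => 'id_b', 'name' => 'b', 'type' => 'dir',  'children' => ['id_c']],
            ['_id' => 'id_c', 'name' => 'c', 'type' => 'file', 'gridfs_id' => 'blob_c'],
        ];
        // Resolving /a/b/c costs one lookup per path component; renaming a directory is cheap.

        // Option 2: a flat namespace -- the full path is stored on a single document.
        $flat = [
            ['_id' => 'id_c', 'filename' => '/a/b/c', 'gridfs_id' => 'blob_c'],
        ];
        // Resolving a path is a single lookup; listing /a/b/ needs an anchored-prefix query,
        // and renaming a directory means rewriting every descendant's filename.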

    Read the article

  • How to install PHP-FPM and PHP on Ubuntu?

    - by Sanoj
    I have problems with installing PHP and in Ubuntu. I followed the instructions on the PHP-FPM site, PHP FastCGI Process Manager but when doing ../configure && make to compile PHP I got a lot of not found messages (listed below), and I don't know how to fix them. I tried both the Integrated compilation and Separate compilation but both compilations ends up with the same messages. Is there a solution or workaround? An alternativ way to install PHP with PHP-FPM? ../configure: 11986: ac_fn_c_check_func: not found ../configure: 11997: ac_fn_c_check_func: not found ../configure: 12147: 5: Bad file descriptor ../configure: 12147: :: checking for socket in -lsocket: not found ../configure: 12147: 6: Bad file descriptor ../configure: 12147: checking for socket in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12147: ac_fn_c_try_link: not found ../configure: 12147: 5: Bad file descriptor ../configure: 12147: :: result: no: not found ../configure: 12147: 6: Bad file descriptor ../configure: 12147: no: not found ../configure: 12147: 5: Bad file descriptor ../configure: 12147: :: checking for __socket in -lsocket: not found ../configure: 12147: 6: Bad file descriptor ../configure: 12147: checking for __socket in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12147: ac_fn_c_try_link: not found ../configure: 12147: 5: Bad file descriptor ../configure: 12147: :: result: no: not found ../configure: 12147: 6: Bad file descriptor ../configure: 12147: no: not found ../configure: 12154: ac_fn_c_check_func: not found ../configure: 12165: ac_fn_c_check_func: not found ../configure: 12315: 5: Bad file descriptor ../configure: 12315: :: checking for socketpair in -lsocket: not found ../configure: 12315: 6: Bad file descriptor ../configure: 12315: checking for socketpair in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12315: ac_fn_c_try_link: not found ../configure: 12315: 5: Bad file descriptor ../configure: 12315: :: result: no: not found ../configure: 12315: 6: Bad file descriptor ../configure: 12315: no: not found ../configure: 12315: 5: Bad file descriptor ../configure: 12315: :: checking for __socketpair in -lsocket: not found ../configure: 12315: 6: Bad file descriptor ../configure: 12315: checking for __socketpair in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12315: ac_fn_c_try_link: not found ../configure: 12315: 5: Bad file descriptor ../configure: 12315: :: result: no: not found ../configure: 12315: 6: Bad file descriptor ../configure: 12315: no: not found ../configure: 12322: ac_fn_c_check_func: not found ../configure: 12333: ac_fn_c_check_func: not found ../configure: 12483: 5: Bad file descriptor ../configure: 12483: :: checking for htonl in -lsocket: not found ../configure: 12483: 6: Bad file descriptor ../configure: 12483: checking for htonl in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12483: ac_fn_c_try_link: not found ../configure: 12483: 5: Bad file descriptor ../configure: 12483: :: result: no: not found ../configure: 12483: 6: Bad file descriptor ../configure: 12483: no: not found ../configure: 12483: 5: Bad file descriptor ../configure: 12483: :: checking for __htonl in -lsocket: not found ../configure: 12483: 6: Bad file descriptor ../configure: 12483: checking for __htonl in -lsocket... 
: not found cat: confdefs.h: No such file or directory ../configure: 12483: ac_fn_c_try_link: not found ../configure: 12483: 5: Bad file descriptor ../configure: 12483: :: result: no: not found ../configure: 12483: 6: Bad file descriptor ../configure: 12483: no: not found ../configure: 12490: ac_fn_c_check_func: not found ../configure: 12501: ac_fn_c_check_func: not found ../configure: 12651: 5: Bad file descriptor ../configure: 12651: :: checking for gethostname in -lnsl: not found ../configure: 12651: 6: Bad file descriptor ../configure: 12651: checking for gethostname in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12651: ac_fn_c_try_link: not found ../configure: 12651: 5: Bad file descriptor ../configure: 12651: :: result: no: not found ../configure: 12651: 6: Bad file descriptor ../configure: 12651: no: not found ../configure: 12651: 5: Bad file descriptor ../configure: 12651: :: checking for __gethostname in -lnsl: not found ../configure: 12651: 6: Bad file descriptor ../configure: 12651: checking for __gethostname in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12651: ac_fn_c_try_link: not found ../configure: 12651: 5: Bad file descriptor ../configure: 12651: :: result: no: not found ../configure: 12651: 6: Bad file descriptor ../configure: 12651: no: not found ../configure: 12658: ac_fn_c_check_func: not found ../configure: 12669: ac_fn_c_check_func: not found ../configure: 12819: 5: Bad file descriptor ../configure: 12819: :: checking for gethostbyaddr in -lnsl: not found ../configure: 12819: 6: Bad file descriptor ../configure: 12819: checking for gethostbyaddr in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12819: ac_fn_c_try_link: not found ../configure: 12819: 5: Bad file descriptor ../configure: 12819: :: result: no: not found ../configure: 12819: 6: Bad file descriptor ../configure: 12819: no: not found ../configure: 12819: 5: Bad file descriptor ../configure: 12819: :: checking for __gethostbyaddr in -lnsl: not found ../configure: 12819: 6: Bad file descriptor ../configure: 12819: checking for __gethostbyaddr in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12819: ac_fn_c_try_link: not found ../configure: 12819: 5: Bad file descriptor ../configure: 12819: :: result: no: not found ../configure: 12819: 6: Bad file descriptor ../configure: 12819: no: not found ../configure: 12826: ac_fn_c_check_func: not found ../configure: 12837: ac_fn_c_check_func: not found ../configure: 12987: 5: Bad file descriptor ../configure: 12987: :: checking for yp_get_default_domain in -lnsl: not found ../configure: 12987: 6: Bad file descriptor ../configure: 12987: checking for yp_get_default_domain in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12987: ac_fn_c_try_link: not found ../configure: 12987: 5: Bad file descriptor ../configure: 12987: :: result: no: not found ../configure: 12987: 6: Bad file descriptor ../configure: 12987: no: not found ../configure: 12987: 5: Bad file descriptor ../configure: 12987: :: checking for __yp_get_default_domain in -lnsl: not found ../configure: 12987: 6: Bad file descriptor ../configure: 12987: checking for __yp_get_default_domain in -lnsl... 
: not found cat: confdefs.h: No such file or directory ../configure: 12987: ac_fn_c_try_link: not found ../configure: 12987: 5: Bad file descriptor ../configure: 12987: :: result: no: not found ../configure: 12987: 6: Bad file descriptor ../configure: 12987: no: not found ../configure: 12995: ac_fn_c_check_func: not found ../configure: 13006: ac_fn_c_check_func: not found ../configure: 13156: 5: Bad file descriptor ../configure: 13156: :: checking for dlopen in -ldl: not found ../configure: 13156: 6: Bad file descriptor ../configure: 13156: checking for dlopen in -ldl... : not found cat: confdefs.h: No such file or directory ../configure: 13156: ac_fn_c_try_link: not found ../configure: 13156: 5: Bad file descriptor ../configure: 13156: :: result: no: not found ../configure: 13156: 6: Bad file descriptor ../configure: 13156: no: not found ../configure: 13156: 5: Bad file descriptor ../configure: 13156: :: checking for __dlopen in -ldl: not found ../configure: 13156: 6: Bad file descriptor ../configure: 13156: checking for __dlopen in -ldl... : not found cat: confdefs.h: No such file or directory ../configure: 13156: ac_fn_c_try_link: not found ../configure: 13156: 5: Bad file descriptor ../configure: 13156: :: result: no: not found ../configure: 13156: 6: Bad file descriptor ../configure: 13156: no: not found ../configure: 13164: 5: Bad file descriptor ../configure: 13164: :: checking for sin in -lm: not found ../configure: 13164: 6: Bad file descriptor ../configure: 13164: checking for sin in -lm... : not found cat: confdefs.h: No such file or directory ../configure: 13196: ac_fn_c_try_link: not found ../configure: 13198: 5: Bad file descriptor ../configure: 13198: :: result: no: not found ../configure: 13198: 6: Bad file descriptor ../configure: 13198: no: not found ../configure: 13214: ac_fn_c_check_func: not found ../configure: 13225: ac_fn_c_check_func: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: checking for inet_aton in -lresolv: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: checking for inet_aton in -lresolv... : not found cat: confdefs.h: No such file or directory ../configure: 13510: ac_fn_c_try_link: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: result: no: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: no: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: checking for __inet_aton in -lresolv: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: checking for __inet_aton in -lresolv... : not found cat: confdefs.h: No such file or directory ../configure: 13510: ac_fn_c_try_link: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: result: no: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: no: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: checking for inet_aton in -lbind: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: checking for inet_aton in -lbind... 
: not found cat: confdefs.h: No such file or directory ../configure: 13510: ac_fn_c_try_link: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: result: no: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: no: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: checking for __inet_aton in -lbind: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: checking for __inet_aton in -lbind... : not found cat: confdefs.h: No such file or directory ../configure: 13510: ac_fn_c_try_link: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: result: no: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: no: not found ../configure: 13516: 5: Bad file descriptor ../configure: 13516: :: checking for ANSI C header files: not found ../configure: 13516: 6: Bad file descriptor ../configure: 13516: checking for ANSI C header files... : not found cat: confdefs.h: No such file or directory ../configure: 13615: ac_fn_c_try_compile: not found ../configure: 13617: 5: Bad file descriptor ../configure: 13617: :: result: no: not found ../configure: 13617: 6: Bad file descriptor ../configure: 13617: no: not found ../configure: 13665: ac_cv_header_dirent_dirent.h: not found ../configure: 13665: 5: Bad file descriptor ../configure: 13665: :: checking for dirent.h that defines DIR: not found ../configure: 13665: 6: Bad file descriptor ../configure: 13665: checking for dirent.h that defines DIR... : not found eval: 1: Bad substitution

    Read the article

  • File Sharing: User-created folders are read-only to others on Mac 10.6 Server

    - by Anriëtte Combrink
    Hi there. We recently got a new Mac Mini Server with 10.6 Server on it. It has two 500GB volumes, one of which (Macintosh HD2, the extra one other than the boot disk) we are using to share our work files. I have added a user account for each user in the Users pane of Server Preferences, and all our staff (users added to the system) are added to a new group called toolboxstaff. Now, when a user creates a new folder on this volume, folders are created with read-only access for everyone besides the owner. How do I set it up so that when a user creates a folder, it is created with read/write access for the toolboxstaff group? Thanks in advance.

    Read the article

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a unix socket file. To start the fcgiwrap daemon, I run: setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock" (this is a daemontools daemon). The problem is that my web server runs as the user www-data and not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git and read-only for everyone other than the owner. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to select the permissions of unix socket files, which is quite annoying. Moreover, if I manage to have the socket file exist before I run fcgiwrap (which is quite difficult, given I did not find any shell command to create a socket file), it quits with the following error: Failed to bind: Address already in use The only solution I found is to start the server the following way:

        rm -f fastcgi.sock   # ensure that the socket doesn't already exist
        (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    Which is far from the most elegant solution. Can you think of anything better? Thanks

    Read the article

  • Nautilus and file command in 11.04 don't show metadata for WebM files

    - by Pili
    The file-name extension .webm is used for media files in the WebM multimedia format, which consists of the WebM container (a subset of the Matroska container) and audio and video streams with independent encoding and quality settings. Description of the issue: for files in the WebM format, the file program says that the files are raw data, instead of determining and displaying the real file format, which is WebM. Besides, Nautilus doesn't display the technical metadata of files in this format. Why is the file program not displaying the file format for WebM files?

    Read the article

  • How to add extensions to a lot of files using content of each file?

    - by v8media
    I've got over 10,000 files that don't have extensions from older versions of the Mac OS. They're extremely nested, and they also have all sorts of strange formatting and characters. They don't have file types or creator codes attached to them any longer. A great deal of these files have text in the file that will let me determine extensions (for example Word.Document.8 is in every file created by that version of Word, and Excel.Sheet.8 in every file created with that version of Excel). I found a script that looks like it would work for one of these file types at a time, but it erases parts of filenames after nefarious characters, which is not good. find . -type f -not -name "." -print0 |\ xargs -0 file |\ grep 'Word.Document.8' |\ sed 's/:.*//' |\ xargs -I % echo mv % %.doc So, two questions from that: One is, should I clean the characters in the filenames first, or programmatically deal with those in the script in order to leave them the same? As long as I lose no information from the filenames, I don't see a problem cleaning out slashes and other problem characters. Also, if I clean the filenames, there are likely to be duplicates, so any cleaning script would have to add something like "-1" before the extension to make sure nothing gets lost. 2nd question is how do I change the script so that it will look for more than one file type at the same time and give each the proper extension? I'm not tied to this script, but it is understandable, which is a pro. Mac OS X 10.6 is installed on this file server, but I've got access to any recent versions of OS X. Thanks, Ian

    Read the article

  • Python file-io code listing current folder path instead of the specified

    - by Tom Brito
    I have the code:

        import os
        import sys

        fileList = os.listdir(sys.argv[1])
        for file in fileList:
            if os.path.isfile(file):
                print "File >> " + os.path.abspath(file)
            else:
                print "Dir >> " + os.path.abspath(file)

    located in my music folder ("/home/tom/Music"). When I call it with python test.py "/tmp" I expected it to list my "/tmp" files and folders with the full path, but it printed lines like:

        Dir >> /home/tom/Music/seahorse-gw2jNn
        Dir >> /home/tom/Music/FlashXX9kV847
        Dir >> /home/tom/Music/pulse-DcIEoxW5h2gz

    That is, the correct file names but the wrong path (and these files are not in my Music folder either). What's wrong with this code?

    Read the article

  • How to know file type without extension

    - by Ayusman
    While trying to come up with a servlet-based application to read files and manipulate them (image type conversion), a question came up: is it possible to inspect a file's content and know the file type? Is there a standard that specifies that each file MUST provide some type of marker in its content, so that the application will not have to rely on the file extension? Consider an application scenario: I am creating an application that will be able to convert different file formats to a set of output formats. Say a user uploads a PDF; my application can suggest that the possible conversion formats are Microsoft Word, TIFF, JPEG, etc. As my application will gradually support different file formats (over a period of time), I want it to inspect the input file instead of having the user specify the format, and suggest to the user the possible output formats. I understand this is an open-ended, broad question. Please let me know if it needs to be modified. Thanks, Ayusman
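    There is no universal marker that every file must carry, but most formats do start with recognisable "magic bytes", and libraries ship a magic database that maps those bytes to MIME types (in the Java world, Apache Tika is a common choice for this). As a quick illustration of content-based detection, not the servlet solution itself, here is PHP's Fileinfo extension doing the same job:

        <?php
        // Detect a file's type from its content (magic bytes), not its extension.
        // Requires the Fileinfo extension; the path and conversion map are placeholders.
        $finfo = new finfo(FILEINFO_MIME_TYPE);
        $mime  = $finfo->file('upload.bin');   // e.g. "application/pdf", "image/jpeg"

        $conversions = [
            'application/pdf' => ['doc', 'tiff', 'jpeg'],
            'image/jpeg'      => ['png', 'tiff', 'pdf'],
        ];

        echo "detected $mime\n";
        if (isset($conversions[$mime])) {
            echo 'possible output formats: ' . implode(', ', $conversions[$mime]) . "\n";
        } else {
            echo "no conversions registered for $mime\n";
        }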

    Read the article

  • Read a file to multiple array byte[]

    - by hankol
    I have an encryption algorithm (AES) that accepts a file converted to a byte array and encrypts it. Since I am going to process very big files, the JVM may run out of memory. I am planning to read the file into multiple byte arrays, each containing some part of the file, then iteratively feed the algorithm, and finally merge the pieces to produce the encrypted file. So my question is: is there any way to read a file part by part into multiple byte arrays? I thought I could use IOUtils.toByteArray(InputStream input) to read the file into a byte array and then split it using Arrays.copyOfRange(), but I am afraid that the first call, which reads the whole file into a byte array, will make the JVM run out of memory. Any suggestion please? Thanks

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23 Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output: # dmesg | tail [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null) [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000] [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null) [4982666.631444] VFS: file-max limit 1231582 reached [4982666.764240] VFS: file-max limit 1231582 reached [4982767.360574] VFS: file-max limit 1231582 reached [4982901.904628] VFS: file-max limit 1231582 reached [4982964.930556] VFS: file-max limit 1231582 reached [4982966.352170] VFS: file-max limit 1231582 reached [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000] Obviously, the file-max errors look suspicious, being clustered together and recent. # cat /proc/sys/fs/file-max 1231582 # cat /proc/sys/fs/file-nr 1231712 0 1231582 That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network. # lsof | wc 16046 148253 1882901 # ps -ef | wc 574 6104 44260 I saw some documentation saying: file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file- handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached". My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back) Thanks in advance for any help. And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. 
I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything. All interactive shells, except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again. And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.

    Read the article
