Search Results

Search found 72722 results on 2909 pages for 'file processing'.


  • Mercurial internal Setup on Windows 7 - Exception happened during processing of request from ...

    - by Sad0w1nL1ght
    Hi, I have one central repository and many local ones. On my machine I have both a local and the central repository. I can clone/commit/update/push/pull easily between the local and the central repository on my own machine, but when I try to clone from another machine I get an error:

        listening at http://MyLocalMachine:8000/ (bound to *:8000)
        ----------------------------------------
        Exception happened during processing of request from ('192.168.0.194', 49319)
        Traceback (most recent call last):
          File "SocketServer.pyc", line 558, in process_request_thread
          File "SocketServer.pyc", line 320, in finish_request
          File "mercurial\hgweb\server.pyc", line 47, in __init__
          File "SocketServer.pyc", line 615, in __init__
          File "BaseHTTPServer.pyc", line 329, in handle
          File "BaseHTTPServer.pyc", line 323, in handle_one_request
          File "mercurial\hgweb\server.pyc", line 79, in do_GET
          File "mercurial\hgweb\server.pyc", line 70, in do_POST
          File "mercurial\hgweb\server.pyc", line 63, in do_write
          File "mercurial\hgweb\server.pyc", line 127, in do_hgweb
          File "mercurial\hgweb\hgweb_mod.pyc", line 86, in __call__
          File "mercurial\hgweb\hgweb_mod.pyc", line 118, in run_wsgi
        ErrorResponse
        ----------------------------------------

    The command line that started the central repo:

        hg serve -R TT -n TTZoli

    The command from the remote machine for cloning:

        hg clone --pull http://MyLocalMachine:8000/TT

    Config for the central repo (the second username is the user I'm trying to access the central repo with):

        [ui]
        username = MyLocalUserName
        username = test <[email protected]>
        [web]
        push_ssl = false

    Config for the remote repo:

        [ui]
        username = test <[email protected]>
        [web]
        push_ssl = false

    I'm not sure if it's relevant, but the firewall is turned off on both machines, and the files in the /hg folder are not versioned on the server, except hgignore. Could you please suggest some ideas? What could be the problem? Thanks in advance!
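    Not part of the original question: for reference, a minimal serving-side .hg/hgrc sketch for this kind of setup, assuming the goal is to let the test user clone and push over plain HTTP. Note that [ui] should contain only one username line; as far as I recall, when the key is repeated the later value simply wins.

        [ui]
        username = test <[email protected]>

        [web]
        push_ssl = false
        allow_push = *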

    Read the article

  • Increase the size of a memory mapped file

    - by sandun dhammika
    I am maintaining a memory-mapped file to store my tree-like data structure, and I ran into this problem while updating the structure: the file has a fixed size, and it should be neither too large nor too small. I have methods like

        void mapfile_insert_record(RECORD* /* record */);
        void mapfile_modify_record(RECORD* /* record */);

    Both operations can run out of free space in the mapped file. How do I overcome this? What strategy should I use? Should I (a) check, as a pre-condition in both methods, whether the operation would need to grow the file, or (b) grow it dynamically, for example by running a timer that constantly polls the file's free space and extends it automatically? Any ideas or patterns to overcome this problem?
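    Not from the original question: a minimal POSIX sketch of the usual pattern, grow the backing file with ftruncate() and rebuild the mapping, assuming the file descriptor and the current mapping size are tracked alongside the mapping (the function name is hypothetical).

        #include <stddef.h>
        #include <sys/types.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* Grow the backing file to new_size and rebuild the mapping.
         * Returns the new mapping address, or NULL on failure. */
        static void *mapfile_grow(int fd, void *old_map, size_t old_size, size_t new_size)
        {
            if (ftruncate(fd, (off_t)new_size) != 0)    /* extend the file on disk */
                return NULL;
            if (munmap(old_map, old_size) != 0)         /* drop the old view */
                return NULL;
            void *p = mmap(NULL, new_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            return (p == MAP_FAILED) ? NULL : p;
        }

    Because the new mapping may land at a different address, the tree should store offsets rather than raw pointers; on Linux, mremap(2) can sometimes extend a mapping in place instead of remapping it.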

    Read the article

  • How to Search File Contents in Windows Server 2008 R2

    - by ybbest
    By default, Windows Search only searches by file name. To make it search file contents you need to configure the following. First, make sure the Windows Search Service feature is activated (check this article for details). Then configure Windows Search from Windows Explorer: press the Alt key –> go to Tools –> Folder Options –> Search tab –> select "Always search file names and contents (this might take several minutes)" –> press OK. Now your searches will match file contents, like the good old days of XP. Another way to search contents without changing the search configuration is to type "contents:" in the Windows Explorer search box followed by the word; this searches text files. It is a search filter which seems to be undocumented.
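    As an aside that is not in the original tip: on a server it can be quicker to run a one-off content search from the command line with findstr, which needs no indexing or Explorer configuration (the search word and file mask here are just examples).

        rem list every *.txt under the current directory tree whose contents mention "budget"
        findstr /s /i /m "budget" *.txt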

    Read the article

  • Default file permissions for php user www-data

    - by John Isaacks
    I have PHP installed on my Ubuntu machine. The web root is /var/www, and I set its permissions like so:

        sudo chown -R ftpuser:www-data /var/www

    ftpuser is the user I set up so I can FTP to /var/www from another machine on the network. www-data is the user PHP runs as (I double-checked using whoami from PHP). Whenever I upload a new file to the machine over FTP, the group has no permissions on it, so when I try to access it in my browser via machine-name/new-file.php I am told permission denied and I have to go and chmod the new file. Is there a way to give the www-data user/group access to new files by default, so I don't have to keep running chmod on every new file?
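    Not part of the original question: the usual approach is a group-owned, setgid web root combined with an FTP-server umask so uploads arrive group-readable; a sketch, assuming vsftpd (other FTP daemons spell the umask option differently).

        # give the www-data group access and make new files inherit that group
        sudo chown -R ftpuser:www-data /var/www
        sudo chmod -R g+rX /var/www
        sudo find /var/www -type d -exec chmod g+s {} \;

        # in /etc/vsftpd.conf: uploads become 664 instead of being created without group read
        local_umask=002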

    Read the article

  • How can I add the version of a file to the file name with Tortoise-SVN?

    - by Eric Belair
    I would like to start giving unique names to "cache-able" files - i.e. *.css and *.js - in order to prevent caching, without requiring changes to the web-server settings (as is currently done in IIS). For instance, let's say I have a JavaScript file called global.js. Going forward, I would like it to be named global.123.js when revision 123 is checked in. This would also require the following (see the sketch after this list):
    - The previous version of the file - perhaps it was global.115.js - is removed when the file is deployed.
    - All references to the file are updated with the new file name.
    How do I go about doing this? What concerns do I need to consider?
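    Not from the original question: TortoiseSVN itself ships SubWCRev.exe for injecting the working-copy revision into template files, but the renaming and reference rewriting are usually done in a build or deploy step; a rough shell sketch of the idea using svnversion (file names, paths, and the sed pattern are hypothetical).

        REV=$(svnversion -n .)                              # working-copy revision, e.g. "123" (or "123M" if modified)
        cp js/global.js "js/global.${REV}.js"               # versioned copy for deployment
        find js -name 'global.*.js' ! -name "global.${REV}.js" ! -name global.js -delete   # drop e.g. global.115.js
        sed -i "s/global\(\.[0-9]\+\)\?\.js/global.${REV}.js/g" *.html                      # point references at the new name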

    Read the article

  • Replacing LF, NEL line endings in text file with CR+LF

    - by Tomas Lycken
    I have a text file with a strange character encoding that I'd like to convert to standard UTF-8. I have managed to get part of the way:

        $ file myfile.txt
        myfile.txt: Non-ISO extended-ASCII text, with LF, NEL line endings
        $ iconv -f ascii -t utf-8 myfile.txt > myfile.txt.utf8
        $ file myfile.txt.utf8
        myfile.txt.utf8: UTF-8 Unicode text, with LF, NEL line endings
        ## edit myfile.txt.utf8 using nano, to fix failed character conversions (mostly åäö)
        $ file myfile.txt.utf8
        myfile.txt.utf8: UTF-8 Unicode text, with LF, NEL line endings

    However, I can't figure out how to convert the line endings. How do I replace LF and NEL with CR+LF (or whatever the standard is)? When I'm done, I'd like to see the following:

        $ file myfile.txt
        myfile.txt: UTF-8 Unicode text
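    Not part of the original question: once the file is UTF-8, NEL is U+0085, i.e. the two bytes C2 85, so one approach is to rewrite those bytes as ordinary newlines and then, only if CR+LF is really wanted, run the result through unix2dos; a sketch (the byte-level substitution is an assumption about this particular file).

        perl -pe 's/\xc2\x85/\n/g' myfile.txt.utf8 > myfile.unix.txt   # NEL -> LF
        unix2dos myfile.unix.txt        # optional: LF -> CR+LF (from the "dos2unix" package)
        file myfile.unix.txt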

    Read the article

  • Framework Folders and Duplicate File Names

    - by Kevin Smith
    I have been working with Framework Folders a little bit in the past few days and found one unexpected behavior that is different from Contribution Folders (Folders_g). If you try to check a file into a Framework Folder under a name that already exists in that folder, it will allow it and rename the file for you. In Folders_g this would have generated an error and prevented you from checking in the file. A quick check of the Framework Folders configuration settings in the Application Administrator’s Guide for Content Server does not show a configuration parameter to control this. I'm still thinking about this and am not sure whether I like the new behavior or not. I guess from a user perspective it more closely aligns Framework Folders with how Windows handles duplicate file names, but if you are migrating from Folders_g and expect a duplicate file name to be rejected, this might cause you some problems.

    Read the article

  • Is there a quick way to convert camera raw files to DNG?

    - by dericke
    Before I switched from Windows to Ubuntu for my daily computing, I used Adobe's Photoshop Lightroom for processing photos from my DSLR. Adobe made it really straightforward to convert from proprietary camera raw files (in my case, .NEF) to DNG. I haven't found any way to convert NEF to DNG in Ubuntu yet. Most photography programs do process NEFs to JPG/TIFF/PNG/etc., but I'm looking for a converter for archival purposes. Are there any tools available, either standalone or built into another app, that can losslessly convert from NEF to DNG?

    Read the article

  • Having trouble understanding the NIF model file format

    - by NoobScratcher
    I'm attempting to develop a third-party application to make it easy to import 3D model parts into my mod for Skyrim. The plan was to have a file viewer and a preview window for the NIF model, but since I don't know what the NIF file format actually is, where to get the vertex data from it, or the whole nine yards of parsing such a file in detail, I'm at a loss as to what to do. I'm very good at C++, but not at these super-complicated file formats; I'd much prefer .obj over the NIF format. The file format specification is here -- http://niftools.sourceforge.net/doc/nif/index.html If someone could explain the file format in a natural and simple way, along with the exact parsing needed to create the 3D model in the frustum and how you figured that out, I would be happy to know. I use Cygwin, Notepad++, and Windows 7 (Win32).

    Read the article

  • File exists but is unreadable by PHP

    - by Aron
    More than once I have run into this issue: I have a cache file that is automatically generated by PHP and contains some generated PHP code. However, for some reason the file cannot be read and parsed by PHP. These are the symptoms:
    - The file actually exists on the file system. Using the terminal you can navigate to it, view its contents (which are fully intact), etc.
    - PHP file_exists() reports that the file exists... which is correct, since it does :)
    - Then I include() the file. But when actually parsing it, PHP just treats it as an empty file: no fatal error, just no PHP code actually executed. Again, it's as if the file were completely empty (which, I assure you, it is not).
    - It is not a permissions issue. Permissions are set as needed.
    - Workaround: open the file in the terminal via nano or some other text editor and just save it back to disk. After that (despite no changes to the content), PHP runs it just fine.
    To clarify, this happens rarely, but frequently enough to be a problem, and even when it does, there are hundreds of other similar files on the same system that work without a problem. If this were an issue affecting only my own scripts, I would assume there must be a bug in the way I generate the PHP code. But no, the issue has occurred more than once when deploying to a server (usually from a Beanstalk repository via FTP). It has been present on various servers, Debian and Ubuntu, running Zend Community Server. Any ideas? One that crossed my mind was opcode caching (part of Zend Server CE)... could it be that an empty version of the file is cached if it is requested while the write operation is still in progress?
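    Not from the original question, but it follows the asker's own hypothesis: if a request can hit the cache file while it is still being written, writing to a temporary file and publishing it with an atomic rename() removes that window; a minimal sketch (the helper name is hypothetical).

        <?php
        // Write generated PHP code to the cache atomically: a reader sees either the
        // old complete file or the new complete file, never a half-written one.
        function write_cache_atomically($path, $phpCode)
        {
            $tmp = tempnam(dirname($path), 'cache_');   // temp file on the same filesystem
            if ($tmp === false) {
                return false;
            }
            if (file_put_contents($tmp, $phpCode) === false) {
                unlink($tmp);
                return false;
            }
            chmod($tmp, 0644);                          // tempnam() creates the file as 0600
            return rename($tmp, $path);                 // atomic when source and target share a filesystem
        }

    An opcode cache can still briefly serve a stale compiled version, but it should never see a truncated file this way.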

    Read the article

  • WinRAR extracting file before checking password? [closed]

    - by opatachibueze
    I tried extracting an encrypted RAR file today, and I discovered that I had to wait about as long as a normal extraction takes (it reaches 99% completion) before WinRAR concluded the password was wrong (WinRAR's message: "CRC failed: wrong password or corrupt file?"). My guess is that just before this detection the file exists somewhere on the computer - it has to - and then gets deleted once the password turns out not to match. Is there any way I can forcefully retrieve this file from the PC? Thanks.

    Read the article

  • Run two shell files with threads

    - by user1149157
    How can I run two shell files in parallel so that they do not share the same JVM? Maybe I should use threads, but how do I run two shell files with two threads?

    File 1:

        #!/bin/bash
        #
        # Script for running several experimentations on the same JVM
        # Usage : TRACE_DIR NB_EXPE Factories...
        #
        param="parameter1"
        another="parameter2"
        for ((i = 10; i >= 0; i -= 1))
        do
            echo "run my file with param another "
        done

    File 2:

        #!/bin/bash
        #
        # Script for running several experimentations on the same JVM
        # Usage : TRACE_DIR NB_EXPE Factories...
        #
        a="101"
        b="400"
        c="500"
        echo "run my programme with a b c "
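    Not part of the original question: if the goal is simply to start both scripts at the same time as separate processes (each java invocation inside them then gets its own JVM), plain backgrounding is enough; a sketch assuming the scripts are named file1.sh and file2.sh.

        #!/bin/bash
        ./file1.sh &     # start the first script in the background
        ./file2.sh &     # start the second script in the background
        wait             # block until both have finished
        echo "both scripts done"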

    Read the article

  • Making a sldprt to PDB file converter?

    - by user122083
    I want to create a parser that can read a SolidWorks file and turn it into a Protein Data Bank (PDB) file. This has already been done in a program called DiamondCAD: http://www.zyvex.com/Research/DiamondCAD.html I want to make a parser that can parse the data and then visualize it the same way as DiamondCAD. I have downloaded and opened SolidWorks files before, and they make no sense to me, full of never-before-seen symbols that look like ancient writing. Does anyone know how a .sldprt file is structured and how it can be parsed into a PDB file? (A program called VMD converts a PDB to an .obj file, so there is a proof of concept.)

    Read the article

  • Read hexadecimal numbers from a file and change their representation style

    - by user576844
    I want to write a program that changes the notation of all hexadecimal numbers found in an assembly source file from the traditional (h) style to the C style (0x). I have started the coding part but am not sure how I can detect the hexadecimal numbers, change the style, and save the result back to the file... This is what I have written so far:

        ## MIPS program
        .data
        fin:     .ascii ""                                   # filename for input
        msg0:    .asciiz "aaaa"
        msg1:    .asciiz "Please enter the input file name:"
        buffer:  .asciiz ""

        .text
        #-----------------------
                li   $v0, 4
                la   $a0, msg1
                syscall
                li   $v0, 8
                la   $a0, fin
                li   $a1, 21
                syscall
                jal  fileRead              # read from file
                move $s1, $v0              # $t0 = total number of bytes
                li   $t0, 0                # loop counter
        loop:   bge  $t0, $s1, end         # if end of file reached OR if there is an error in the file
                lb   $t5, buffer($t0)      # load next byte from file
                jal  checkhexa             # check for hexadecimal numbers
                addi $t0, $t0, 1           # increment loop counter
                j    loop
        end:    jal  output
                jal  fileClose
                li   $v0, 10
                syscall

        fileRead:                          # open file for reading
                li   $v0, 13               # system call for open file
                la   $a0, fin              # input file name
                li   $a1, 0                # flag for reading
                li   $a2, 0                # mode is ignored
                syscall                    # open a file
                move $s0, $v0              # save the file descriptor
                                           # reading from file just opened
                li   $v0, 14               # system call for reading from file
                move $a0, $s0              # file descriptor
                la   $a1, buffer           # address of buffer from which to read
                li   $a2, 100000           # hardcoded buffer length
                syscall                    # read from file
                jr   $ra

    Any help would be appreciated.
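    Not part of the original question, and not MIPS: just to pin down the rewrite rule itself, traditional assembler hex constants start with a digit and end in h, so the transformation amounts to a regular-expression substitution; a sketch with GNU sed (the byte-scanning loop above would implement the same test character by character).

        # 0FFh, 1A2Bh, 08h  ->  0x0FF, 0x1A2B, 0x08 ; leaves labels and plain words alone
        sed -E 's/\b([0-9][0-9A-Fa-f]*)[hH]\b/0x\1/g' input.asm > output.asm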

    Read the article

  • I am not able to delete a corrupt NTFS partition on my pen drive. How can I force its deletion?

    - by yesuraj
    I formatted my 16GB pen drive with the NTFS file system in Windows Vista. After that I started copying some files, but only a few were copied before the copy operation hung, so I cancelled it. Now I am unable to use the pen drive. I DON'T REALLY NEED ANY FILES THAT I COPIED TO THE PENDRIVE. I JUST WANT TO USE THE PENDRIVE AGAIN.

    I have tried using Ubuntu to format the pen drive. When I use fdisk to delete the partition it looks like it works, but in fact the partition is not deleted, and I am unable to format the drive with any other file system. When I try gparted, it throws the following error:

        Error mounting: mount exited with exit code 14: The disk contains an unclean file system (0,0).
        The file system wasn't safely closed on Windows. Fixing.
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read NTFS $Bitmap: Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice.
        The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then
        first activate it and mount a different device under the /dev/mapper directory
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the dmraid documentation for more details.

    When I searched the Internet I found help on how to recover, but I don't want to recover, I want to format the drive again. When I press w after deleting the partition, it takes more time than previously. After that I removed the pen drive and re-inserted it, but the partition I had deleted was still present. If I simply type the command fdisk /dev/sdb without removing the pen drive after the partition is deleted, it returns the error message "Unable to open /dev/sdb". Here are the steps that I followed:

        root@yesuraj-ubuntu:~# fdisk /dev/sdb
        Command (m for help): d
        Selected partition 1
        Command (m for help): w
        The partition table has been altered!
        Calling ioctl() to re-read partition table.
        Syncing disks

    The dmesg prints are as follows:

        [ 6139.774753] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6154.816941] usb 2-1.3: device descriptor read/64, error -110
        [ 6169.968908] usb 2-1.3: device descriptor read/64, error -110
        [ 6170.158427] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6185.200638] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.352572] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.542093] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6205.559460] usb 2-1.3: device descriptor read/8, error -110

    I used the dd command and it erased the partition table, but now when I connect the pen drive dmesg contains this error message: [88143.437001] sdb: unknown partition table. I am not able to create a partition using fdisk /dev/sdb; the error message says that it is unable to find the node. Other messages from dmesg follow below:

        [87100.531596] usb 2-1.3: new high speed USB device number 39 using ehci_hcd
        [87130.915257] usb 2-1.3: new high speed USB device number 40 using ehci_hcd
        [87135.932647] usb 2-1.3: device descriptor read/8, error -110
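    Not part of the original question, and only worth trying if the hardware is still alive (the repeated "device descriptor read" errors in dmesg often point at a failing stick or cable): a destructive sketch that wipes the first megabyte and rebuilds a single FAT32 partition, assuming the stick really is /dev/sdb (double-check with lsblk or dmesg first; this destroys everything on the device).

        sudo dd if=/dev/zero of=/dev/sdb bs=1M count=1     # wipe the old partition table and NTFS boot sector
        sudo parted -s /dev/sdb mklabel msdos mkpart primary fat32 1MiB 100%
        sudo mkfs.vfat -n PENDRIVE /dev/sdb1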

    Read the article

  • Cropping image with ImageScience

    - by fl00r
    ImageScience is cool and light, and I am using it in my Sinatra app, but I can't understand how to crop an image to a non-square shape, or how to make a thumbnail constrained by two dimensions. From the ImageScience site:

        ImageScience.with_image(file) do |img|
          img.cropped_thumbnail(100) do |thumb|
            thumb.save "#{file}_cropped.png"
          end

          img.thumbnail(100) do |thumb|
            thumb.save "#{file}_thumb.png"
          end

          img.resize(100, 150) do |img2|
            img2.save "#{file}_resize.png"
          end
        end

    I can crop and thumbnail with only ONE dimension, but I want to use two, as in RMagick. For example, I want to crop a 100x200px box from an image, or make a thumbnail whose width is no bigger than 300 and whose height is no bigger than 500 pixels.
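    Not from the original question: if I remember the ImageScience API correctly it also exposes with_crop(left, top, right, bottom), which covers the non-square case; treat that method name and its signature as an assumption to verify against the gem's documentation. A sketch:

        require 'image_science'

        # Crop a 100x200 box from the top-left corner, then build a thumbnail that
        # fits inside 300x500 by scaling the longer side (coordinates are examples).
        ImageScience.with_image(file) do |img|
          img.with_crop(0, 0, 100, 200) do |box|     # left, top, right, bottom (assumed signature)
            box.save "#{file}_box.png"
          end

          scale = [300.0 / img.width, 500.0 / img.height, 1.0].min   # never upscale
          img.resize((img.width * scale).round, (img.height * scale).round) do |thumb|
            thumb.save "#{file}_fit.png"
          end
        end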

    Read the article

  • C# Combining lines

    - by Mike
    Hey everybody, this is what I have going on. I have two text files; let's call them A.txt and B.txt. A.txt is a config file that contains a bunch of folder names, with only one listing per folder. B.txt is a directory listing that contains folder names and sizes, with multiple listings per folder, not just one entry. What I need is: if a line in B contains a name from A, take all lines in B that contain that name and write them out as A|B|B|B etc. So, for example:

    A.txt:
        Apple
        Orange
        Pear
        XBSj
        HEROE

    B.txt:
        Apple|3123123
        Apple|3434
        Orange|99999999
        Orange|1234544
        Pear|11
        Pear|12
        XBSJ|43949
        XBSJ|43933

    Result.txt:
        Apple|3123123|3434
        Orange|99999999|1234544
        Pear|11|12
        XBSJ|43949|43933

    This is what I had, but it's not really doing what I need:

        string[] combineconfig = File.ReadAllLines(@"C:\a.txt");
        foreach (string ccline in combineconfig)
        {
            string[] readlines = File.ReadAllLines(@"C:\b.txt");
            if (readlines.Contains(ccline))
            {
                foreach (string rdlines in readlines)
                {
                    string[] pslines = rdlines.Split('|');
                    File.AppendAllText(@"C:\result.txt", ccline + '|' + pslines[0]);
                }
            }
        }

    I now realize it will never get past the first "if", because it compares against the entire line and can't find an exact match. But I still believe the output file would not contain what I need even if it did.
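    Not from the original question: one way to get the A|B|B|B grouping is to index B.txt by its first field and then walk A.txt; a sketch reusing the paths above (the case-insensitive comparer is an assumption, added only because A.txt has "XBSj" while B.txt has "XBSJ").

        using System;
        using System.IO;
        using System.Linq;

        class CombineLines
        {
            static void Main()
            {
                // Group B.txt by folder name: "Apple" -> { "3123123", "3434" }, ...
                var sizesByName = File.ReadAllLines(@"C:\b.txt")
                    .Select(line => line.Split('|'))
                    .Where(parts => parts.Length == 2)
                    .ToLookup(parts => parts[0], parts => parts[1],
                              StringComparer.OrdinalIgnoreCase);

                // For every name in A.txt that has entries in B.txt, emit Name|size|size|...
                var result = File.ReadAllLines(@"C:\a.txt")
                    .Where(name => sizesByName[name].Any())
                    .Select(name => name + "|" + string.Join("|", sizesByName[name].ToArray()));

                File.WriteAllLines(@"C:\result.txt", result.ToArray());
            }
        }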

    Read the article

  • Java Servlet framework that does things like re-encoding images to a preferred format etc.

    - by mP
    Are there any frameworks/libraries that provide servlets/filters etc. that handle re-encoding of images on the fly? Specifically, something that can (see the sketch after this list):
    - interpret the Accept headers and output the file, re-encoding into the new format if necessary by checking the actual format of the original image file;
    - provide a low- and a high-quality version of an image;
    - re-encode an image into new dimensions, where width and height might be query-string parameters.
    I could create versions of the file in all the formats at upload time, but that seems overkill. I would rather lazily create the re-encoded file and stick it in a cache if it gets served again, etc.
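    Not an existing framework and not from the original question: just to sketch the shape such a servlet takes, a minimal on-the-fly resize/re-encode servlet using javax.imageio (the URL parameters, directory, and class name are hypothetical; a real implementation would cache results, validate the name parameter, preserve the aspect ratio, and map "jpg" to the "image/jpeg" MIME type).

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical sketch: GET /images?name=photo.png&w=200&h=150&fmt=png
        public class ReencodingImageServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                BufferedImage src = ImageIO.read(new File("/var/images", req.getParameter("name")));
                int w = Integer.parseInt(req.getParameter("w"));
                int h = Integer.parseInt(req.getParameter("h"));
                String fmt = req.getParameter("fmt");        // "png", "jpeg", ...

                BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = dst.createGraphics();
                g.drawImage(src, 0, 0, w, h, null);          // naive scaling to the requested size
                g.dispose();

                resp.setContentType("image/" + fmt);
                ImageIO.write(dst, fmt, resp.getOutputStream());
            }
        }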

    Read the article

  • How should I fix problems with file permissions while restoring from Time Machine?

    - by Andrew Grimm
    While restoring files from a Time Machine backup, I got the error message "The operation can’t be completed because you don’t have permission to access some of the items." because of problems with files in one folder. What's the safest way to deal with this? The folder in question has permissions like:

        Andrew-Grimms-MacBook-Pro:kmer agrimm$ pwd
        /Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer
        Andrew-Grimms-MacBook-Pro:kmer agrimm$ ls -ltra
        total 6156896
        drwxrwxrwx@ 19 agrimm  staff        680 18 Jan  2008 Saccharomyces_cerevisiae
        -r--------@  1 agrimm  staff   60221852  4 Aug  2009 hs_ref_GRCh37_chrY.fa
        -r--------@  1 agrimm  staff  157488804  4 Aug  2009 hs_ref_GRCh37_chrX.fa
        (snip a few files)
        -r--------@  1 agrimm  staff     676063 27 Oct  2009 NC_001143.fna
        -rw-r--r--@  1 agrimm  staff       6148 23 Mar  2010 .DS_Store
        drwxr-xr-x@  3 agrimm  staff       1530 23 Mar  2010 .
        drwxr-xr-x@ 30 agrimm  staff       1054 20 Nov  14:43 ..

    Is it ok to do sudo chmod, or is there a safer approach? Background: files within the original folder on my computer also had weird permissions - I suspect I may have used sudo to copy some files from a thumb drive onto my computer.

    Read the article

  • File association and msconfig broken. Malware?

    - by Moshe
    A friend of mine has an Acer laptop running Vista Home Basic. I can't get System Properties to open, msconfig does not run, and the .exe file type is asking me what program to open it with. How can I fix this? I'm running AVG now; assuming nothing shows up, what are the fixes for the issues mentioned above?

    Read the article

  • Torque jobs do not enter "E" state (unless "qrun")

    - by Vi.
    Jobs I add to the queue stay there in the "Queued" state and no attempt is made to execute them (unless I manually qrun them). /var/spool/torque/server_logs says just:

        04/11/2011 12:43:27;0100;PBS_Server;Job;16.localhost;enqueuing into batch, state 1 hop 1
        04/11/2011 12:43:27;0008;PBS_Server;Job;16.localhost;Job Queued at request of test@localhost, owner = test@localhost, job name = Qqq, queue = batch

    The job requires just 1 CPU on 1 node.

        # qmgr -c "list queue batch"
        Queue batch
            queue_type = Execution
            total_jobs = 0
            state_count = Transit:0 Queued:0 Held:0 Waiting:0 Running:0 Exiting:0
            max_running = 3
            acl_host_enable = True
            acl_hosts = localhost
            resources_min.ncpus = 1
            resources_min.nodect = 1
            resources_default.ncpus = 1
            resources_default.nodes = 1
            resources_default.walltime = 00:00:10
            mtime = Mon Apr 11 12:07:10 2011
            resources_assigned.ncpus = 0
            resources_assigned.nodect = 0
            kill_delay = 3
            enabled = True
            started = True

    I can't set resources_assigned to nonzero because of "Cannot set attribute, read only or insufficient permission  resources_assigned.ncpus". When I qrun some task, this goes to the mom's log:

        04/11/2011 21:27:48;0001; pbs_mom;Svr;pbs_mom;LOG_DEBUG::mom_checkpoint_job_has_checkpoint, FALSE
        04/11/2011 21:27:48;0001; pbs_mom;Job;TMomFinalizeJob3;job 18.localhost started, pid = 28592
        04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;scan_for_terminated: job 18.localhost task 1 terminated, sid=28592
        04/11/2011 21:27:48;0008; pbs_mom;Job;18.localhost;job was terminated
        04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;top of preobit_reply
        04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;DIS_reply_read/decode_DIS_replySvr worked, top of while loop
        04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;in while loop, no error from job stat
        04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;obit sent to server

    Scheduler log (/var/spool/torque/sched_logs/20110705):

        07/05/2011 21:44:53;0002; pbs_sched;Svr;Log;Log opened
        07/05/2011 21:44:53;0002; pbs_sched;Svr;TokenAct;Account file /var/spool/torque/sched_priv/accounting/20110705 opened
        07/05/2011 21:44:53;0002; pbs_sched;Svr;main;/usr/sbin/pbs_sched startup pid 16234

    qstat -f:

        Job Id: 26.localhost
            Job_Name = qwe
            Job_Owner = test@localhost
            job_state = Q
            queue = batch
            server = localhost
            Checkpoint = u
            ctime = Tue Jul 5 21:43:31 2011
            Error_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.e26
            Hold_Types = n
            Join_Path = n
            Keep_Files = n
            Mail_Points = a
            mtime = Tue Jul 5 21:43:31 2011
            Output_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.o26
            Priority = 0
            qtime = Tue Jul 5 21:43:31 2011
            Rerunable = True
            Resource_List.ncpus = 1
            Resource_List.neednodes = 1:ppn=1
            Resource_List.nodect = 1
            Resource_List.nodes = 1:ppn=1
            Resource_List.walltime = 00:01:00
            substate = 10
            Variable_List = PBS_O_HOME=/home/test,PBS_O_LANG=en_US.UTF-8,
                PBS_O_LOGNAME=test,
                PBS_O_PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games,
                PBS_O_MAIL=/var/mail/test,PBS_O_SHELL=/bin/sh,PBS_SERVER=127.0.0.1,
                PBS_O_WORKDIR=/home/test/jscfi/default/0.738784810485275,
                PBS_O_QUEUE=batch,PBS_O_HOST=localhost
            euser = test
            egroup = test
            queue_rank = 1
            queue_type = E
            etime = Tue Jul 5 21:43:31 2011
            submit_args = run.pbs
            Walltime.Remaining = 6
            fault_tolerant = False

    How do I make it execute jobs automatically, without a manual qrun?
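    Not part of the original question: jobs that sit in Q forever unless qrun is the classic symptom of the server not driving a scheduler at all, so the first things worth checking are the server's scheduling flag and whether pbs_sched is actually running; a sketch (attribute name as in stock TORQUE).

        qmgr -c "print server" | grep -i scheduling    # should show: set server scheduling = True
        qmgr -c "set server scheduling = True"         # enable scheduling cycles if it was off
        ps aux | grep pbs_sched                        # confirm a scheduler process exists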

    Read the article

  • Nautilus cannot move to trash

    - by amorfis
    This takes place on Ubuntu. I want to move a file to the trash. I am not the owner of the file, but the file belongs to root:samba, I am a member of the samba group, and the file permissions are rwxrw-r--. I get the message "Cannot move file to trash, do you want to delete immediately?" and nothing more. Why can't I move it to the trash?

    Read the article
