Search Results

Search found 37650 results on 1506 pages for 'files'.

Page 451 of 1506

  • Looking for tools/tips to repair FileMaker v6.0 database corruption

    - by Bradford
    My wife uses an application that stores persistent information in files created by FileMaker v6. The files recently became corrupted, and I'm trying to find a way to recover whatever I can. Generally, I'm looking for any tool that would let me programmatically query the database file without dishing out the money for the full version. If anyone knows of a tool that can do this, please let me know!

    Read the article

  • How to log nginx vhost bandwidth?

    - by bwizzy
    I'm looking for a way to track the bandwidth of multiple vhosts on an nginx web server. I'm guessing there is a way to set up the log files to output this information, and then I can write a script to parse through the log files and add up the byte counts. If that is the case, does anyone know the correct log format, and whether there is already a script out there that does this?
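    One way this is commonly done, as a sketch (the log path and format name below are made up): log the vhost and bytes sent for every request via nginx's log_format/access_log directives, then sum per host with awk.

        # nginx http block: one line per request with vhost and bytes sent
        log_format vhost_bw '$host $bytes_sent';
        access_log /var/log/nginx/vhost_bw.log vhost_bw;

        # Sum the bytes per vhost from that log
        awk '{ bytes[$1] += $2 } END { for (h in bytes) printf "%s\t%d\n", h, bytes[h] }' \
            /var/log/nginx/vhost_bw.log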

    Read the article

  • Recover deleted data

    - by atapimp24
    Hi, a user somehow deleted his documents from his laptop and has no backup available. How would one go about recovering these deleted files? I have zero experience with this issue. Are there any open source or freeware tools I can use to attempt a recovery of these files? Thanks

    Read the article

  • How do I get the Mac OS X 'quick look' feature to be more programmer-friendly?

    - by Lee
    There are numerous text files that are always included in common downloads such as Rails plugins: LICENSE, ChangeLog, Rakefile, etc. I know these files are plain text, but Mac OS X refuses to acknowledge this automatically. If I hit the spacebar in Finder to activate "quick look", the icon becomes huge but the contents of the file are not shown, presumably because they have no file extension. How do I stop this madness so I can quickly look at READMEs just by hitting the spacebar? I've already got a ton of text editors installed on my Mac: this question is purely about efficiency and making simple files accessible as quickly as possible.

    Read the article

  • Lots of dropped packages when tcpdumping on busy interface

    - by Frands Hansen
    My challenge: I need to do tcpdumping of a lot of data - actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic.

    To sum it up:
    - Log all traffic in promiscuous mode from 2 interfaces
    - Those interfaces are not assigned an IP address
    - pcap files must be rotated per ~1G
    - When 10 TB of files are stored, start truncating the oldest

    What I currently do: right now I use tcpdump like this:

        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename and moving it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now I do not do any housekeeping, but I plan on monitoring disk space, and when I have 100 GB left I will start wiping the oldest files - this should be fine.

    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:

        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel  <-- This is my concern

    How can I avoid so many packets being dropped?

    Things I have already tried or looked at:
    - I changed the values of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help - it took care of roughly half of the dropped packets.
    - I have also looked at gulp - the problem with gulp is that it does not support multiple interfaces in one process, and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient, and I don't have a machine with 10 TB+ of RAM that I could run Wireshark on, so that's out.

    Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
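    For what it's worth, a sketch of one more knob to try (assuming a reasonably recent tcpdump/libpcap; the buffer size below is an arbitrary example): -B sets the operating system capture buffer size in KiB directly, serving the same purpose as the rmem sysctls.

        # Same invocation with an explicitly enlarged capture buffer (512 MiB = 524288 KiB)
        tcpdump -n -B 524288 -C 1000 -z /data/compress.sh -i any \
            -w /data/livedump/capture.pcap $FILTER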

    Read the article

  • NFS host is not exporting the "share"

    - by user1345260
    I have an NFS server, usanfsd01, and a remote machine, usafssd01. I tried mounting a directory from the NFS server onto the remote machine by adding the following line to /etc/fstab as root:

        usanfsdo1:/home/dblogs /home/data/dblogs nfs rw 0 0

    But when I run the following command to see if NFS is exporting the share, it's not shown:

        showmount -e usanfsdo1

    Can someone please help? Also, a point of interest: there is another mount that works between the same servers, and it's defined as below in /etc/fstab:

        usanfsdo1:/home/files /home/data/files nfs rw 0 0

    /etc/exports:

        /nfs/home/dblogs 'IPADDRESS'(rw,no_root_squash)
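    For reference, a sketch of the usual export workflow on the server side (run as root on the NFS server, after editing /etc/exports):

        # Re-export everything from /etc/exports, then verify what is actually shared
        exportfs -ra
        showmount -e localhost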

    Read the article

  • forfiles.exe scripting

    - by PHLiGHT
    I'm looking to automatically delete files older than 7 days with forfiles. The command below works when I run it manually and respond yes to deleting the files. How can I incorporate the yes into this? This is the output:

        E:\Documents and Settings\Administrator>forfiles -p "H:\SHARED\Scans" -s -m *.* -d -7 -c "cmd /c del @path"
        Could Not Find H:\SHARED\Scans.DS_Store
        H:\SHARED\Scans\XXX\DOC006.XSM*, Are you sure (Y/N)?
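    One way to suppress the prompt, as an untested sketch: pass /q to del so wildcard deletions run quietly without asking for confirmation.

        forfiles -p "H:\SHARED\Scans" -s -m *.* -d -7 -c "cmd /c del /q @path"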

    Read the article

  • Unable to edit and save a file on a remote machine using Notepad++

    - by gsk
    I am using Notepad++ 5.3.1. I want to edit and save files on a remote machine (both machines are running Windows XP). I have granted the access privileges and security permissions on the folder containing the files. When I try to save any of these files after editing in Notepad++, I get the following error: 'Please check whether if this file is opened in another program'. The file is not open anywhere else, but I still get this error. I was able to edit and save earlier; the error has only appeared since yesterday.

    Read the article

  • Function declaration in C and C++

    - by Happy Mittal
    I have two C++ files, say file1.cpp and file2.cpp:

        //file1.cpp
        #include<cstdio>
        void fun(int i)
        {
            printf("%d\n",i);
        }

        //file2.cpp
        void fun(double);
        int main()
        {
            fun(5);
        }

    When I compile and link them as C++ files, I get an error, "undefined reference to fun(double)". But when I do this as C files, I get no error, and 0 is printed instead of 5. Please explain the reason. Moreover, I want to ask whether we need to declare a function before defining it, because I haven't declared it in file1.cpp and yet no error comes up during compilation.
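    For illustration, a sketch of the usual way to keep the two translation units in agreement (the header name is made up): put a single declaration in a shared header so the call site and the definition use the same parameter type.

        // fun.h (hypothetical shared header)
        #ifndef FUN_H
        #define FUN_H
        void fun(int i);   // the one declaration both files see
        #endif

        // file2.cpp
        #include "fun.h"
        int main()
        {
            fun(5);        // now resolves to fun(int), defined in file1.cpp
        }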

    Read the article

  • Making your own shortcuts on Windows

    - by Mohammad
    Hello all, I'm using Windows Vista, and I was wondering if I can set up keyboard shortcuts for applications. For example, if I press a certain combination of keys, Notepad opens, or something like that. If such shortcuts already exist, how can I find out what they are? Another thing: if I have an application that works on media files, for example, can I add it to the right-click menu that appears when I right-click on media files? I hope you got that :) Thanks a lot :)

    Read the article

  • How do I use functions from a DLL?

    - by Russel
    How do I use functions from a DLL? I'm a total newbie and I don't really understand how to use functions from a DLL file. I'm using MS Visual Studio 2008 (C++). My understanding is that the DLL files will have corresponding header files, and as long as I include the header files and call the functions normally in my code, it should work. Is that correct? Then I would just need the compiled exe to be able to find the DLL? Please let me know if that is a remotely correct understanding! Thanks! Russel
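    That header route is the usual one (the import library, a .lib shipped alongside the DLL, also goes in the linker inputs). For contrast, a minimal sketch of the other approach, loading a DLL at run time with the Win32 LoadLibrary/GetProcAddress pair - the DLL name and exported function here are made up:

        #include <windows.h>
        #include <cstdio>

        // Expected signature of the export we want to call (hypothetical)
        typedef int (*AddFn)(int, int);

        int main()
        {
            HMODULE lib = LoadLibraryA("mylib.dll");              // made-up DLL name
            if (!lib) { std::printf("could not load DLL\n"); return 1; }

            AddFn add = reinterpret_cast<AddFn>(GetProcAddress(lib, "add"));  // made-up export
            if (add) std::printf("2 + 3 = %d\n", add(2, 3));

            FreeLibrary(lib);
            return 0;
        }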

    Read the article

  • .htaccess not loading up on CentOS

    - by matt
    For some reason none of my .htaccess files are loading. What are the common things that make this happen? I can put the same directives in httpd.conf and they work there, but Apache doesn't read the .htaccess files in my directories.
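    For reference, a sketch of the setting that governs this (the directory path is a placeholder): Apache ignores .htaccess wherever the enclosing <Directory> block has AllowOverride None, which is a common default on CentOS.

        # httpd.conf (or a file under conf.d/)
        <Directory "/var/www/html">
            AllowOverride All
        </Directory>
        # then reload Apache: service httpd reload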

    Read the article

  • Git - will the file moves be detected?

    - by Ben Aston
    I performed some modifications on a branch (A). I then decided to create a brand new branch (B) based on the state of my existing working copy, and committed and pushed to that. A number of files had been moved during my earlier refactoring and hence were no longer under version control, having been moved directly in the filesystem. By accident I did not add these files to git before committing and pushing to the new branch (B). If I now add these files, commit and push, will Git be able to detect the file move operations?
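    As background, a sketch of how this surfaces in practice (paths are placeholders): Git does not record moves at all; it infers them later from content similarity when the diff machinery is asked for rename detection.

        git add -A                             # stage the moved files and the removals of the old paths
        git commit -m "Add files moved during refactoring"

        git show -M --stat                     # -M turns on rename detection for this diff
        git log --follow -- new/path/file      # follow one file's history across the move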

    Read the article

  • How to maintain status of gridview checkbox

    - by Royson
    Hi, On my form I have a tree-view with check-boxes on the left side and a grid-view with a check-box column on the right side. The treeview shows all folders. If the user clicks a tree node, that folder's files are all displayed in the gridview. If the user checks some files, selects another node and then returns, the previously checked files should still be displayed as checked. How do I maintain the state of the grid-view check-box column? One more query: if I check a specific node, it should check all rows of the gridview. Thanks. I am using VS 2008 and .NET 2.0.

    Read the article

  • Force Windows 7 to store thumbnails locally

    - by kotekzot
    I want Windows 7 to store thumbnail cache files in the same folder as the files (thumbs.db) instead of using the centralized location for all thumbnails (by default %userprofile%\AppData\Local\Microsoft\Windows\Explorer). How would one achieve this? Alternatively, if the former is implausible, I'd settle for no thumbnail caching at all, forcing Windows to regenerate thumbnails each time a folder is accessed.

    Read the article

  • Setting ownership/permissions for symfony2 and other web projects

    - by Handonam
    I've been very confused as to how to set permissions and users/groups for my sites; it is one of my weakest suits. My current problem is that I often run into a situation where, if I view a particular page, it won't have permission to write to the cache or logs. At that point I'll set the ownership to apache. Then, in other cases, if I try to run internal scripts, for example, I can't write to these cache/log files because I set them for apache. Currently, my Symfony2 files are all owned by me as part of the staff group (Handonam:Staff). I've seen various people creating groups such as www-data, apache, etc., and adding users such as themselves (e.g. Handonam) or www to those groups. So my question is: for Symfony2 and other web projects, what's generally the best user/group setup so that both Apache and I can interact with these files, while maintaining decent security?
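    One common pattern, as a sketch (it assumes the web server runs as the www-data user and that the filesystem supports ACLs; the paths are Symfony2's defaults): keep the project owned by your own user and grant the web server write access only to the cache and log directories.

        # From the project root: give both the web server user and yourself rwX on cache/logs,
        # for existing files and (via -d) as the default for files created later
        sudo setfacl  -R -m u:www-data:rwX -m u:$(whoami):rwX app/cache app/logs
        sudo setfacl -dR -m u:www-data:rwX -m u:$(whoami):rwX app/cache app/logs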

    Read the article

  • How to know if a video file can be played on a DVD player

    - by user23950
    Is there an application that can emulate a DVD player? I've converted an .mp4 video using Allok Video Converter and chose the output format "xVid (.avi)" <-- that is exactly what is written in the application. I don't want to waste a blank DVD trying to find out whether it can really be played. So if you have tried this before, please tell me if it works. I have tried burning .avi files before and it worked, but those were genuine AVI files that I did not convert.

    Read the article

  • Can you iterate over chunks() with request.POST in Django?

    - by Sebastian
    I'm trying to optimize a site I'm building with Django/Flash and am having a problem using Django's chunks() iteration feature. I'm sending an image from Flash to Django as request.POST data rather than through a form (which would use request.FILES). The problem I foresee is that with a large user volume I could potentially kill memory, but it seems that Django only allows iterating over chunks with request.FILES. Is there a way to either: 1) wrap my request.POST data in a request.FILES object (thus spoofing Django), or 2) use chunks() with request.POST data?
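    As a third option, a hedged sketch (the view name, destination path and chunk size are arbitrary, and it assumes the Flash client posts the raw image bytes as the request body and a Django version recent enough to expose HttpRequest as a file-like object): read the upload incrementally instead of pulling it all into memory.

        # views.py - sketch only
        from django.http import HttpResponse

        CHUNK_SIZE = 64 * 1024

        def upload_image(request):
            with open('/tmp/upload.jpg', 'wb') as out:   # placeholder destination
                while True:
                    chunk = request.read(CHUNK_SIZE)     # stream the body in chunks
                    if not chunk:
                        break
                    out.write(chunk)
            return HttpResponse('ok')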

    Read the article

  • Rebuilding an open-source iPhone project but getting too many errors

    - by user482473
    I downloaded an open-source sample iPhone project, and when I compiled it, everything was OK. The project has a lot of C files. So I tried rebuilding the project from scratch just to see if I could duplicate it. I made sure all the include files are there, the paths are correct, and all the libraries are present. I checked the original project settings to see whether they use any specific search paths that I need to follow. Anyway, I followed the exact settings, but this time my duplicated project produced some 1500 errors, a lot of them coming from inside those C files. So, what's wrong? Does anyone have tips on why I got so many errors? I copied everything I could from the original project.

    Read the article

  • Filter log file by defining regexes

    - by fmpdmb
    I have some HUGE log files (50 MB; ~500K lines) that I need to start filtering some of the crap out of. The log files are produced with log4j and follow the basic pattern: [log-level] date-time class etc., etc., log-message. I'm looking for a way to specify a start regex and an end regex (or something similar) that will filter the matching entries out of the file, so I can more easily wade through these massive files. I'm sure I could write a Java program to accomplish this task, but I thought I'd ask the community before going down that path. Thanks in advance.
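    As a sketch of a non-Java route (it assumes every entry starts with a bracketed log level on its own line, e.g. [DEBUG], and that continuation lines such as stack traces follow their entry): awk can drop whole multi-line entries whose first line matches a pattern.

        awk '
            /^\[/ { drop = ($0 ~ /^\[DEBUG\]/) }   # a new entry starts: decide whether to drop it
            !drop                                  # print every line that belongs to a kept entry
        ' application.log > filtered.log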

    Read the article

  • How to slow down the Uploadify plugin script for jQuery

    - by Peter
    Now if that isn't a weird question, I don't know what is. But here is the problem: I have a function in my "onSelect" option that has to collect some data (through AJAX), and I have a function in my "onComplete" option that processes the just-uploaded files based on the data "onSelect" collected. However, with very small files, "onSelect" hasn't finished yet before the upload is complete, and "onComplete" fails because it lacks the necessary data. Bigger files work just fine. So I'm looking for a way to stall the upload of the file, to let it start only after the function in onSelect has finished fetching the necessary data. Any ideas? Here is an example of the function that onSelect triggers:

        var foo = new Array();

        function checkDB( event, queueID, fileObj ) {
            $.post(
                ajax_folder + 'fetchData',
                { ref: "someValue" },
                function( data ) {
                    foo[ queueID ] = data.result;
                },
                'json'
            );
            return true;
        }
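    One framework-agnostic workaround, as a sketch (processUpload stands in for whatever the existing onComplete logic does, and the handler keeps whatever argument list that handler already receives): instead of stalling the upload, let onComplete retry briefly until checkDB's data has arrived.

        function onCompleteHandler( event, queueID, fileObj, response, data ) {
            if ( foo[ queueID ] === undefined ) {
                // checkDB's AJAX call hasn't returned yet; try again in 100 ms
                setTimeout( function() {
                    onCompleteHandler( event, queueID, fileObj, response, data );
                }, 100 );
                return;
            }
            processUpload( fileObj, foo[ queueID ] );   // hypothetical existing processing step
        }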

    Read the article

  • Hudson + gitolite + virtual host on staging server

    - by takeshin
    I have an Ubuntu server that I want to be my continuous integration server (for Zend Application based projects) and the staging server as well. The team pushes source files to the repository: /home/git/repositories/testing.git. Hudson then does the build, and the master branch is exported (maybe cloned is a better word) by the git Hudson plugin to: /var/lib/hudson/jobs/test/workspace/. The workspace contains the .git folder as well, which is not necessary on my staging website. How do you set up a virtual host to serve the staging version of the repository's content? Does the virtual host point to the workspace, or should I export the files to another directory? What about permissions and security? Hudson is the owner of all the workspace files. Do I have to add some post-build actions to set up access? P.S. If this question is more appropriate for Server Fault, please migrate it.
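    One possible arrangement, sketched with assumed paths and an assumed www-data group for the web server: add a post-build shell step that copies the workspace, minus .git, into the directory the virtual host points at, and give the web server group read access.

        # Post-build shell step in the Hudson job
        rsync -a --delete --exclude='.git' /var/lib/hudson/jobs/test/workspace/ /var/www/staging/
        chgrp -R www-data /var/www/staging
        chmod -R g+rX /var/www/staging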

    Read the article
