Search Results

Search found 46487 results on 1860 pages for 'reading files'.

  • High mysql server load, sar output

    - by eric
    I have a MySQL server that should be performing better than it seems to be. We're running Ubuntu on an Amazon Cluster Compute (cc1.4xlarge) instance:

        Linux ip-10-0-1-60 3.2.0-25-virtual #40-Ubuntu SMP Wed May 23 22:20:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04 LTS
        Release:        12.04
        Codename:       precise

    I have several output files from sar that I'm not really sure how to interpret. For example, I ran:

        # Individual block device I/O activities
        sar -d 1 180 > logs/block_device_io.log &

    which gave me what looks like really high utilisation of my disk. (It turns out this block device maps to /dev/xvdh, mounted on /var/lib/mysql, type ext4 (rw,_netdev).) The output from my log:

        10:48:59 PM       DEV         tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz   await   svctm   %util
        10:49:00 PM dev202-16        0.00      0.00      0.00      0.00      0.00    0.00    0.00    0.00
        10:49:00 PM dev202-32        0.00      0.00      0.00      0.00      0.00    0.00    0.00    0.00
        10:49:00 PM dev8-0           0.00      0.00      0.00      0.00      0.00    0.00    0.00    0.00
        10:49:00 PM dev202-112    1008.00  31040.00   1416.00     32.20      1.02    1.01    0.89   90.00
        10:49:00 PM dev202-80        0.00      0.00      0.00      0.00      0.00    0.00    0.00    0.00

    Am I wrong in thinking this is a problem? %util is above 90 almost the entire time we're seeing slowness. Or does this just mean MySQL is doing what it's supposed to do?
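    A first step worth taking: sar's devM-N names are just major-minor device numbers, so it's easy to confirm which device dev202-112 actually is and then watch its extended stats live while the slowness happens. A minimal sketch, assuming the sysstat tools are installed:

        # map sar's "dev202-112" (major 202, minor 112) to a kernel device name
        grep -w 202 /proc/partitions     # columns: major minor #blocks name
        ls -l /dev/xvdh                  # the "202, 112" major/minor pair appears in the listing
        # then cross-check %util and await in real time during a slow period
        iostat -x 1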

  • 0x0000007b WinXP in VirtualBox with no Admin access on source drive

    - by Ozzah
    I have a physical drive with an installation of WinXP-32, of which I have made a clone using SysInternals disk2vhd. I have no admin rights on this installation. I have tried to boot this VHD in VirtualBox, but it blue-screens with 0x0000007B. I have researched this, and apparently the cause is that Windows doesn't like the IDE controller changing. I have tried all the available controllers in VirtualBox, but they all produce the same result. There is a Microsoft KB article which describes a method involving loading a .reg file and extracting some sys files from a CAB. This method apparently works well for many people with this problem; however, it will not work for me, as I don't have admin rights on the WinXP installation. Is there anything I can do in this case? Is there any way of loading the .reg file outside the OS? Or perhaps doing a repair using the WinXP CD? Even though I have no admin rights on the source drive's installation of Windows, I do obviously have full access to the file system directly on the drive and also in the VHD itself.
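    One route worth noting: the registry changes from that KB article don't have to be made from inside the running XP. Its SYSTEM hive can be edited offline from any Windows machine where you do have admin rights (the host, or another VM). A sketch, assuming the VHD is attached as drive X: (hypothetical letter):

        REM load the offline XP SYSTEM hive under a temporary name
        reg load HKLM\OfflineSys X:\WINDOWS\system32\config\system
        REM ...apply the KB's IDE/atapi service entries here, with key paths
        REM rewritten from HKEY_LOCAL_MACHINE\SYSTEM to HKEY_LOCAL_MACHINE\OfflineSys...
        reg unload HKLM\OfflineSys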

  • Need a VM for running a PHP Sandbox

    - by Phani
    I am working on a web application honeypot. It collects the PHP files it receives (as part of an RFI attack) and runs them in order to return the result back to the attacker. The aim is to coax the bad guy into going further into his attack. Based on the answers to my SO question, I am looking at using VMs for running the PHP sandbox. The honeypot itself consists of Python code and will be running in a Linux environment (preferably Ubuntu-like). These are some of the requirements:

    1. The VM should be as lightweight as possible. We are going to distribute the code around and many people are going to use the VM along with the Python-based honeypot, so the installation and configuration should not be too difficult.
    2. The guest system would also be Linux, as we are going to distribute the VM image around.
    3. It should be possible for the Python code outside to talk to the guest system. It would pass the PHP file to the guest system and get the output result from it.
    4. It should be possible to automate the initial configuration of the VM (such as allocation of RAM, etc.). I would like to randomize these settings in order to make the sandbox less 'fingerprintable'.

    I have looked at OpenVZ and KVM so far. Are there any other VMs that I might look at? What do you recommend?
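    For requirement 3, the host-to-guest handoff can be as simple as piping the captured payload to the guest's PHP interpreter over SSH (the Python side would wrap this in a subprocess call with a timeout). A sketch, where 'sandbox' is a hypothetical guest hostname running sshd with php-cli installed:

        # run the attacker-supplied PHP inside the guest and capture its output
        ssh sandbox php < captured_rfi_payload.php > result.html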

  • Wrong CSS mime type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is properly processed and login works fine, but the CSS files are not loaded, and Firefox is throwing the following errors:

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0
        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0

    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related?

    Edit: It seems that it's nginx-related. The content-type isn't set for any type other than text/html. I had to manually include the following declarations to force the CSS and JS content-types. That's ugly, and I never had the problem before... any ideas?

        location ~ \.css {
            add_header Content-Type text/css;
        }
        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }
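    A detail worth checking first: nginx matches locations against the URI with the query string already stripped, so the ?s= argument shouldn't be what breaks extension matching. The usual cause of everything coming back as text/html is a missing (or not inherited) mime.types map. A sketch of the stock arrangement, assuming Debian's standard paths:

        http {
            include       /etc/nginx/mime.types;   # maps .css -> text/css, .js -> application/x-javascript
            default_type  application/octet-stream;
            ...
        }

    With that in place, the per-extension add_header workarounds should be unnecessary.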

  • Network corruption - corrupt downloads, corrupt streams, etc.

    - by rfrankel
    I've been having some problems with my home LAN. Downloaded executables won't run, my remote desktop sessions keep getting interrupted due to encryption errors, Flash video streams show visible corruption (both Hulu and YouTube), and I've had a couple of downloads for which the MD5 hashes don't match. The problem has even occurred with a couple of images embedded in webpages, though that's rare enough (presumably because images are relatively smaller files). I've had this problem across two Windows machines and a Mac, so it's neither machine-specific nor at the app or OS level. Comcast claims it's nothing to do with them, and my Linksys/Cisco RV016 router is out of warranty, so I have no access to official support. When I log into my router, it shows no error packets or dropped packets received. I plugged a laptop directly into the router and was able to download a 5.5 MB file and verify its MD5 hash. That's not proof that the problem is downstream of the router, but it makes it seem quite likely, since I failed to download the same file several times from two desktops (one Mac, one Windows). Could this be a wiring problem? If so, is there any clever/elegant way to determine which wiring is faulty with just software? If I can avoid tracing all the wires throughout my entire house, it would make my life quite a bit easier.
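    As a sketch of narrowing it down with software alone: repeat the same transfer from each point in the chain (wired directly to the router, through each switch port, over wireless) and compare checksums; the segment where mismatches start appearing is the suspect. Assuming a known-good file URL (hypothetical here):

        URL=http://example.com/known-good-5mb.bin
        for i in 1 2 3 4 5; do
            curl -s "$URL" | md5sum    # identical hashes = clean path; any variation = corruption
        done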

  • What is the advantage of not running as root? [closed]

    - by Shmuel Brill
    Possible Duplicate: What's wrong with always being root?

    All modern brands of Linux highly discourage (or disable) running as root instead of as a normal user. I do not understand why. As a "normal" user, one could:

    1. Download a rogue program from the internet.
    2. Run it (after all, one isn't root, so what can it do?).
    3. It installs itself in .bashrc or .xinitrc.
    4. It writes rogue "sudo" and "su" programs and adds . to the path.
    5. Not noticing that . is in the path, one runs sudo.
    6. The rogue program now has the root password and can do anything it wants on the system.

    Even if steps 3-6 don't happen, the program could still:

    1. Be part of a botnet.
    2. Read all files in the home directory and send them back (mine them for SSNs, credit card numbers, bank account numbers, etc.).
    3. Send spam.
    4. Run a backdoor server to allow an attacker a chance to connect to the machine to determine vulnerabilities.

    It seems that the whole "permissions" thing (root/non-root) is just to prevent amateur crackers from getting into the system. So the question is: is there a point in avoiding running as root, and is there a way to protect oneself if one wants to run unsafe code?
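    For steps 4-5 of the attack above, the tell is a '.' (or an empty entry, which the shell treats the same way) somewhere in PATH. A one-line check, as a sketch:

        # print any PATH components that are '.' or empty, with their position
        echo "$PATH" | tr ':' '\n' | grep -nE '^\.?$'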

  • Making the iPhone work with stripped down Windows XP

    - by Gabriel
    Hi, this is my first time posting here and I have a really specific question. I have an ASUS Eee 901 running Windows XP Home. I had everything working well, but then I decided to improve performance by moving Windows to the smaller but faster internal SSD. I used nLite to strip down Windows, following the instructions here: http://wiki.eeeuser.com/howto:nlitexp

    I now have a very lightweight installation of XP Home with SP3 and all the current updates. Almost everything is working really well. I have installed iTunes and I CAN sync with no problems. However, each time I plug in my iPhone 3GS (latest firmware), Windows tries and fails to install drivers. The Found New Hardware Wizard launches, but nothing I do will make it complete successfully, with the result that the iPhone does not show up in Windows as removable storage, or as a camera. When I launch the Camera and Scanner Wizard, it shows only my webcam, not the iPhone. I have verified that I have the following files in place:

        Windows\System32\ptpusb.dll (regsvr32 successful)
        Windows\System32\ptpusd.dll (entry point not found, cannot be registered)
        Windows\System32\usbaaplrc.dll (entry point not found, cannot be registered)
        Windows\System32\drivers\usbaapl.sys
        Windows\System32\drivers\usbscan.sys
        Windows\System32\drivers\usbstor.sys

    Does anyone know if some other file is required, or if there's some other element preventing this from working?

    Edit (from posted answer): I did select Cameras & Camcorders, and my webcam is working fine for video and still capture.

  • Failed to open the Parallels networking module

    - by user49204
    I'm running Parallels Desktop 7 on OS X Lion and encounter the issue in the subject every time I try to launch a Parallels VM. The error message contains a hint to restore the network configuration to defaults, which does not help. As advised on some forums, I ran the following script:

        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hypervisor.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hid_hook.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_usb_connect.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_vnic.kext"

    The output of:

        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"

    is:

        Diagnostics for /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext:
        Warnings:
            The booter does not recognize symbolic links; confirm these files/directories aren't needed for startup:
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeDirectory
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeRequirements
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeResources
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeSignature

        Dependency Resolution Failures:
            No kexts found for these libraries:
                com.parallels.kext.prl_hypervisor

    I've noticed that prl_netbridge is not being loaded (when I try to unload it, I'm notified that it is not loaded). Am I doing something wrong? What can be the reason for such behaviour?
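    The "Dependency Resolution Failures" section suggests prl_netbridge can't resolve prl_hypervisor at load time. A sketch of loading in dependency order and verifying, assuming these paths match your install:

        # load the dependency first, then point kextutil at it explicitly
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hypervisor.kext"
        sudo kextutil -d "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hypervisor.kext" \
            "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"
        # confirm what actually got loaded
        kextstat | grep -i parallels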

  • Freeradius authentication failed for unknown reason

    - by Moein7tl
    I followed this instruction to force FreeRADIUS to use a MySQL database, and ran FreeRADIUS in debug mode, but it rejects all authentication. MySQL database:

        mysql> select * from radcheck;
        +----+----------+-----------+----+---------+
        | id | username | attribute | op | value   |
        +----+----------+-----------+----+---------+
        |  1 | test     | Password  | == | test123 |
        |  2 | test     | Auth-Type | == | Local   |
        +----+----------+-----------+----+---------+
        2 rows in set (0.02 sec)

    radtest command:

        # radtest test test123 localhost 0 testing123
        Sending Access-Request of id 235 to 127.0.0.1 port 1812
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0x00000000000000000000000000000000
        rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=235, length=20

    radiusd debug mode log:

        rad_recv: Access-Request packet from host 127.0.0.1 port 51034, id=235, length=74
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0xbf111cbbae24fb0f0a558bfa26f53476
        # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group authorize {...}
        ++[preprocess] returns ok
        ++[chap] returns noop
        ++[mschap] returns noop
        ++[digest] returns noop
        [suffix] No '@' in User-Name = "test", looking up realm NULL
        [suffix] No such realm "NULL"
        ++[suffix] returns noop
        [eap] No EAP-Message, not doing EAP
        ++[eap] returns noop
        ++[files] returns noop
        ++[expiration] returns noop
        ++[logintime] returns noop
        [pap] WARNING! No "known good" password found for the user. Authentication may fail because of this.
        ++[pap] returns noop
        ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
        Failed to authenticate the user.
        Using Post-Auth-Type Reject
        # Executing group from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group REJECT {...}
        [attr_filter.access_reject] expand: %{User-Name} -> test
        attr_filter: Matched entry DEFAULT at line 11
        ++[attr_filter.access_reject] returns updated
        Delaying reject of request 20 for 1 seconds
        Going to the next request
        Waking up in 0.9 seconds.
        Sending delayed reject for request 20
        Sending Access-Reject of id 235 to 127.0.0.1 port 51034
        Waking up in 4.9 seconds.
        Cleaning up request 20 ID 235 with timestamp +4325
        Ready to process requests.

    Where is the problem, and how should I solve it?
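    The debug line '[pap] WARNING! No "known good" password found' is the giveaway: nothing in the authorize section supplied a reference password (note there is no ++[sql] line in the log, so check that sql is listed in authorize), and so no Auth-Type was ever set. Also, in FreeRADIUS 2.x the radcheck entry should use the Cleartext-Password attribute with the := operator; 'Password' with '==' is a legacy form the server ignores, and a forced 'Auth-Type Local' row is best removed. As a sketch:

        UPDATE radcheck SET attribute = 'Cleartext-Password', op = ':='
            WHERE username = 'test' AND attribute = 'Password';
        DELETE FROM radcheck WHERE username = 'test' AND attribute = 'Auth-Type';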

  • HLS video segmenting complications. How to create a transport stream with ffmpeg

    - by Agzam
    I have H.264 videos, and currently we're using Apple's HTTP Live Streaming tools and mediafilesegmenter to segment these files. What I need to do is switch to an alternative segmenter based on this very popular open-sourced segmenter. The problem is that this segmenter does not take just any video; it takes only MPEG-TS videos. So I have to convert my H.264 videos to TS first. I can do that with ffmpeg. I'm using this:

        ffmpeg -i encoded.mp4 -vcodec h264 -i encoded.mp4 -sameq -acodec aac -strict experimental -f mpegts output.ts

    But this creates fairly larger output. The reason is that Apple's segmenter keeps the same video codec (AVC) and the same audio codec (AAC), whereas ffmpeg changes the video format to MPEG Video. The question is: can I somehow keep the same AVC video codec and still convert the video to a transport stream? My goal is to keep the same video quality and the same video codecs as Apple's mediafilesegmenter does.

    UPD: Okay... it seems that ffmpeg CAN split videos into segments:

        ffmpeg -i encoded.mp4 -c copy -map 0 -vbsf h264_mp4toannexb -f segment -segment_time 10 -segment_list test.m3u8 -segment_format mpegts segment%d.ts

    That still has one problem: it doesn't create an HTTP Live Streaming index file (-segment_list creates a file with a list of segments, but it doesn't look like an HLS index). So you still have to create the index file yourself.
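    For reference, the index that mediafilesegmenter emits is just a short text playlist, so generating one from the segment list is straightforward. A minimal sketch (the segment names and the 10-second durations are assumptions that must match the real segments):

        #EXTM3U
        #EXT-X-VERSION:3
        #EXT-X-TARGETDURATION:10
        #EXT-X-MEDIA-SEQUENCE:0
        #EXTINF:10.0,
        segment0.ts
        #EXTINF:10.0,
        segment1.ts
        #EXT-X-ENDLIST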

  • SVN and WebSVN with different users access restriction on multiple repositories on linux

    - by user55658
    And first of all, sorry for my English. I've installed an Ubuntu Server 10.04.1 machine with apache2, subversion, svn_dav and websvn (and other services, of course, like php5, mysql 5.1, etc.). I've configured my svn with multiple repositories, each one with different groups and users, like:

        /var/myrepos/repo1    group: mygroup1
        /var/myrepos/repo2    group: mygroup2
        /var/myrepos/repo3    user: johndoe

    With access through svn_dav it works perfectly, i.e. http://myserver/svnrepo1 is accessible only to users in mygroup1, with their Linux users and svn passwords. It also works for the other repos with their users and groups. But when I tried websvn, it shows all repos without checking whether, say, a user in mygroup1 may view repo2 (that's what I don't want). You can log in as any user in mygroup1, mygroup2, or as johndoe, and you get into all repositories. I'll try to find a solution and I'll post the news; if anyone can help me with this, I'd appreciate it so much! Thanks for everything. I show my files:

    /etc/apache2/mods-available/dav_svn.conf:

        <Location /svnrepo1>
            DAV svn
            SVNPath /var/myrepos/repo1
            AuthType Basic
            AuthName "Repositorio Subversion de MD"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
        </Location>

        <Location /websvn/>
            Options FollowSymLinks
            order allow,deny
            allow from all
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
        </Location>
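    One approach that covers both front ends is a single mod_authz_svn access file: svn_dav enforces it directly, and WebSVN can be told to honour the same file. A sketch, with hypothetical paths and users:

        # /etc/apache2/svn-authz
        [groups]
        mygroup1 = alice, bob
        mygroup2 = carol

        [repo1:/]
        @mygroup1 = rw

        [repo2:/]
        @mygroup2 = rw

        [repo3:/]
        johndoe = rw

    In each svn <Location> block this is enabled with AuthzSVNAccessFile /etc/apache2/svn-authz, and in WebSVN's include/config.php with $config->useAuthenticationFile('/etc/apache2/svn-authz'); (globally, or per repository), so WebSVN then hides repositories the logged-in user may not read.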

  • Inexpensive, simple screen recording application for mac

    - by donut
    I am more and more consistently running into the need to create screencasts (record my screen) for clients, to show them how to use programs or websites. Up until now I've been using Jing and it's been wonderful, but I would like something that can give me a file less annoying than a .swf: a .mov or, best of all, something that plays without fuss on Mac and Windows. Also, the 5-minute limit is annoying, but not show-stopping. Basically, I'd like to be able to actually give them the file on a CD or something instead of relying on whatever host I use staying up for eternity. To sum up, here's what I require:

    1. Record a portion or all of the screen.
    2. Record audio from the mic while recording the screen.
    3. Export files easily playable on Mac and Windows (requiring QuickTime is okay, but not ideal).
    4. Work on Mac OS 10.5+.
    5. Allow recording videos of at least 5 minutes.
    6. Text in recorded videos is easily readable when exported.

    Bonus points for:

    1. Recording videos longer than 5 minutes.
    2. Exported videos that work in Windows Media Player without any fuss.

    I haven't upgraded to Snow Leopard yet, but I know it has some screen recording built in; I don't know whether it would be sufficient. The reason I say "simple" is that most of the applications I've seen do much more than I need (I mean, Jing is nearly perfect for my needs) and cost more than I would like to spend.

  • Node.js, Nginx and Varnish with WebSockets

    - by Joe S
    I'm in the process of architecting the backend of a new Node.js web app that I'd like to be pretty scalable, but not overkill. In all of my previous Node.js deployments, I have used Nginx to serve static assets such as JS/CSS and reverse proxy to Node (as I've heard Nginx does a much better job of this, and Express is not really production-ready). However, Nginx does not support WebSockets. I am making extensive use of Socket.IO for the first time and discovered many articles detailing this limitation. Most of them suggest using Varnish to direct the WebSockets traffic directly to Node, bypassing Nginx. This is my current setup:

        Varnish         : port 80   - routing HTTP requests to Nginx, and WebSockets directly to Node
        Nginx           : port 8080 - serving static assets like CSS/JS
        Node.js Express : port 3000 - serving the app, over HTTP + WebSockets

    However, there is now the added complexity that Varnish doesn't support HTTPS, which requires Stunnel or some other solution, and it's also not load-balanced yet (perhaps I will use HAProxy or something). The complexity is stacking up! I would like to keep things simpler than this if possible. Is it still necessary to reverse proxy Node.js using Nginx when Varnish is also present? Even if Express is slow at serving static files, they should theoretically be cached by Varnish. Or are there better ways to implement this?
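    For reference, the Varnish side of that routing is small. A sketch in Varnish 3 VCL syntax (backend names and ports are assumptions matching the setup above):

        backend nginx { .host = "127.0.0.1"; .port = "8080"; }
        backend node  { .host = "127.0.0.1"; .port = "3000"; }

        sub vcl_recv {
            if (req.http.Upgrade ~ "(?i)websocket") {
                set req.backend = node;
                return (pipe);   # hand the raw TCP connection straight to Node
            }
            set req.backend = nginx;
        }

        sub vcl_pipe {
            # keep the Upgrade header when piping, or the handshake breaks
            if (req.http.upgrade) {
                set bereq.http.upgrade = req.http.upgrade;
            }
        }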

  • How can I view the binary contents of a file natively in Windows 7? (Is it possible?)

    - by Shannon Severance
    I have a file, a little bigger than 500 MB, that is causing some problems. I believe the issue is in the end-of-line (EOL) convention used, and I would like to look at the file in its uninterpreted raw form to confirm it. How can I view the "binary" of a file using something built in to Windows 7? I would prefer to avoid having to download anything additional.

    My coworker and I opened the file in text editors, and they show the lines as one would expect. But both text editors will open files with different EOL conventions and interpret them automagically (TextEdit and Emacs 24.2). For Emacs, I had created a second file with just the first 4K bytes using head -c4096 on a Linux box, and opened that from my Windows box. I attempted to use hexl-mode in Emacs, but when I went to hexl-mode and back to text-mode, the contents of the buffer had changed, adding a visible ^M to the end of each line, so I'm not trusting that at the moment. Based on other evidence, I believe the EOL convention is carriage return only.

    To know what is actually in the file, I would like to look at the binary contents of the file, or at least a couple thousand bytes of it, preferably in hex, though I could work with decimal or octal. Just ones and zeros would be pretty rough to look at.
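    One built-in option worth a try: certutil ships with Windows 7 and can hex-dump a file. On a 500 MB file the dump would be enormous, so pointing it at the 4K head file is the sane move. A sketch (file names hypothetical):

        REM hex-dump the small sample to a viewable output file
        certutil -encodehex head4096.bin head4096.hex
        more head4096.hex
        REM a lone 0D (CR) with no following 0A (LF) confirms carriage-return-only EOLs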

  • If spaces in filenames are possible, why do some of us still avoid using them?

    - by Chris W. Rea
    Somebody I know expressed irritation today regarding those of us who tend not to use spaces in our filenames, e.g. NamingThingsLikeThis.txt, despite most modern operating systems supporting spaces in filenames. Non-technical people must look at filenames created by geeks and wonder where we learned English. So, what are the reasons that spaces in filenames are avoided or discouraged? The most obvious reason I could think of, and why I typically avoid it, is the extra quotes required on the command line when dealing with such files. Are there any other significant reasons, other than the practice being a vestigial preference?

    UPDATE: Thanks for all your answers! I'm surprised how popular this was. So, here's a summary. Six reasons why geeks prefer filenames without spaces in them:

    1. It's irritating to put quotes around them when referenced on the command line (or elsewhere).
    2. Some older operating systems didn't support them, and us old dogs are used to that.
    3. Some tools still don't support spaces in filenames at all, or not very well. (But they should.)
    4. It's irritating to escape spaces when used where spaces must be escaped, such as URLs.
    5. Certain unenlightened services (e.g. file hosting, webmail) remove or replace spaces anyway!
    6. Names without spaces can be shorter, which is sometimes desirable as paths are limited.
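    Reasons 1 and 4 in shell terms, as a small illustration (file names hypothetical):

        cp My Report.txt backup/       # wrong: the shell passes three separate arguments
        cp "My Report.txt" backup/     # quoted, works
        cp My\ Report.txt backup/      # escaped, works
        # and the same name in a URL needs percent-encoding:
        # http://example.com/files/My%20Report.txt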

  • Resolving "JBoss Web Console is Accessible to Unauthenticated Remote Users" vulnerability

    - by IAmJeff
    Our security team has determined there is a vulnerability in one of our systems. We are using JBoss 5.1.0GA on RHEL 5.10. Vulnerability description: JBoss Web Console is Accessible to Unauthenticated Remote Users. Yes, this looks familiar; refer to question 501417. I do not find the answer there complete. Can someone (or multiple someones) answer:

    1. Does a newer version of JBoss fix this vulnerability?
    2. Are there links describing, in more detail, the manual modification of JBoss configuration files to resolve the issue?
    3. Are there other options to remediate this vulnerability?

    Why don't I find the other answer complete? I'm not at all familiar with JBoss, so this answer seems a bit too simple:

        The web-console.war contains commented-out templates for basic security in its WEB-INF/web.xml as well as commented-out setup for a security domain in WEB-INF/jboss-web.xml.

    Just uncomment those basic security blocks and restart? Is there anything else I need to include? This seems generic. Do I need to include anything about my environment, such as absolute paths, etc.? Am I making this too complicated?
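    For orientation, the commented-out template in web-console.war's WEB-INF/web.xml is along these lines (a sketch; exact names can vary between 5.1.x builds):

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>HtmlAdaptor</web-resource-name>
            <description>Require the JBossAdmin role to access the console</description>
            <url-pattern>/*</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>JBossAdmin</role-name>
          </auth-constraint>
        </security-constraint>
        <login-config>
          <auth-method>BASIC</auth-method>
          <realm-name>JBoss Web Console</realm-name>
        </login-config>
        <security-role>
          <role-name>JBossAdmin</role-name>
        </security-role>

    Uncommenting it (plus the security-domain element in jboss-web.xml, which points at the login module defining the JBossAdmin users) and restarting is indeed the core of the fix; no environment-specific absolute paths are involved.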

  • Nagios 403 forbidden, indexes?

    - by Georgi
    I installed Nagios under FreeBSD 9, but can't get it to be viewable from other PCs' browsers. I think the problem is in the indexes, or that there is no index file (there is main.php instead). Apache says the syntax is OK. The permissions of the dir are 777. The logs print:

        Directory index forbidden by Options directive: /usr/local/www/nagios/

    This is my configuration:

        ScriptAlias /nagios/cgi-bin/ /usr/local/www/nagios/cgi-bin/
        Alias /nagios /usr/local/www/nagios/

        <Directory /usr/local/www/nagios>
            Options +Indexes FollowSymLinks +ExecCGI
            AllowOverride Indexes AuthConfig FileInfo
            Order allow,deny
            Allow from all
            AuthName "Nagios Access"
            AuthType Basic
            AuthUSerFile /usr/local/etc/nagios/htpasswd.users
            Require valid-user
        </Directory>

        <Directory /usr/local/www/nagios/cgi-bin>
            Options +ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from all
            AuthName "Nagios Access"
            AuthType Basic
            AuthUSerFile /usr/local/etc/nagios/htpasswd.users
            Require valid-user
        </Directory>

    Maybe the problem is in the indexes? When I remove the Options line it's public and available, but it lists the files and says that indexes are forbidden...
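    A DirectoryIndex line may be the missing piece: without one that names a file actually present in the directory, Apache falls back to generating a listing, which the Options handling then forbids. A sketch for the first Directory block, assuming the FreeBSD port's entry page is index.php (or main.php on older layouts):

        <Directory /usr/local/www/nagios>
            DirectoryIndex index.php main.php
            ...
        </Directory>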

  • Run script before shutdown/restart

    - by dtbarne
    I'd like to run a PHP script when an instance is told to shut down, but of course before it actually finishes shutting down. My particular script is just looking to push some log files from the local partition to another server. I've got the gist of how this process works, but I need some clarification. How I understand it (please correct me if I'm wrong):

    1. Create an executable script in /etc/init.d (let's call it /etc/init.d/push-logs).
    2. Create a symlink to /etc/init.d/push-logs from /etc/rc0.d (shutdown) and /etc/rc6.d (reboot). The name should be KXXpush-logs.

    Here are my questions:

    1. Of course: am I understanding this correctly?
    2. For #2 above, it sounds like the lower the XX the better. Is there a number too low to use? Does it matter if it shares a number with another script?
    3. Does the script in /etc/init.d/push-logs HAVE to follow the standard init.d template (supporting start/stop, etc. commands)? This doesn't really apply to my use case. If possible, I just want the script to be the following:

        #!/bin/sh
        #
        # Run PHP file prior to shutdown
        #
        /usr/bin/php /path/to/php_file.php
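    A sketch of the wiring for step 2, with K10 as an arbitrary early-ish priority:

        chmod +x /etc/init.d/push-logs
        ln -s ../init.d/push-logs /etc/rc0.d/K10push-logs   # runlevel 0: shutdown
        ln -s ../init.d/push-logs /etc/rc6.d/K10push-logs   # runlevel 6: reboot

    One caveat: on many distros the K-scripts are invoked with a "stop" argument, so a script that ignores its arguments (like the one above) still runs, but handling "stop" explicitly is the more conventional form.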

  • DIY NAS - links for Instructions

    - by Kaushik Gopal
    Good folk of SU, I'm planning to build a NAS (network-attached storage) box, and I'm planning to do it cheap (read: old PC config + open-source software). I was looking for good DIY links. Before you shoot this down as a repost: I'm only looking for good links containing detailed instructions for setting up a NAS. I did a fair bit of searching and found these links (so please suggest others; while these links are great, they focus more on the hardware side, and I'm looking for more instructions on the software side). For the sake of the interwebs:

    Ubuntu:

        http://snarfquest.com/wiki/index.php/Setting_up_a_Home_NAS
        http://www.smallnetbuilder.com/content/view/27962/77/
        http://jonpeck.blogspot.com/2006/11/how-to-configure-80-fileserver-in-45.html

    FreeNAS:

        http://www.smallbusinesscomputing.com/webmaster/article.php/3719706
        http://www.codeproject.com/KB/system/homemade_nas.aspx

    There was one at rubbervir.us that everyone points to, but apparently the site has gone down.

    A couple of other queries:

    1. Is printer/scanner sharing a possibility with NAS devices?
    2. Many talk of torrent support with NAS devices; a little more light on this? Does this mean an auto-download of torrents through a feed into the NAS, or just support for storing torrent download files on the NAS? (I don't see the difference between the latter and a normal file transfer.)

  • Trying to run QEMU with a file as hda

    - by Felix
    I'm trying to run QEMU and use a simple file on the host system as the guest's hard drive. Here's what I attempted so far:

        $ dd if=/dev/zero of=/home/felix/vm/archlinux.img bs=1MB count=8192
        8192+0 records in
        8192+0 records out
        8192000000 bytes (8.2 GB) copied, 86.6054 s, 94.6 MB/s
        $ qemu -hda /home/felix/vm/archlinux.img -cdrom archlinux-2009.08-netinstall-i686.iso -boot d

    Then I try to install Arch Linux to that file. It goes pretty well (it's able to format it, from what I can tell) until I start installing packages, when I get errors like this: [error screenshot]. And, of course, everything goes downhill from there (unable to mount the partition, corrupted files, ...). What am I doing wrong?

    Note: I'm just doing this for entertainment purposes. I don't intend to actually use this on servers or anything. The only use I can think of for this kind of installation would be to actually get an 8 GB USB stick and dd that file to it, and wham! You have a bootable stick with a fully fledged and customized OS, without torturing the stick through the installation.
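    Two hedged observations. First, dd's bs=1MB is decimal (1,000,000 bytes), so the image is 8,192,000,000 bytes rather than 8 GiB; probably harmless, but bs=1M is the power-of-two spelling. Second, qemu-img is the more usual way to make disk images, and it can create a growable one:

        # raw image, equivalent to the dd approach (sparse until written)
        qemu-img create -f raw /home/felix/vm/archlinux.img 8G
        # or a qcow2 image that only consumes space as the guest writes
        qemu-img create -f qcow2 /home/felix/vm/archlinux.qcow2 8G
        qemu -hda /home/felix/vm/archlinux.qcow2 -cdrom archlinux-2009.08-netinstall-i686.iso -boot d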

  • Debian's Wordpress with broken plugin path?

    - by Vinícius Ferrão
    I've installed WordPress from the Debian Wheezy package system, and the plugins folder appears to be broken. As stated in Apache2's error log files:

        [error] File does not exist: /var/lib/wordpress/wp-content/plugins/var

    The plugins are building URLs from the full filesystem path, not the relative path. I can temporarily "fix" the problem by making a symbolic link to /var in the plugins folder, but I know that this is wrong and dirty. I don't know where to start debugging this, so any help is welcome. Additional information:

    /etc/wordpress/htaccess:

        # Multisites generated htaccess
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]

        # add a trailing slash to /wp-admin
        RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L]

        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L]
        RewriteRule . index.php [L]

    Apache2 configuration file:

        <VirtualHost *:80>
            Alias /wp-content /var/lib/wordpress/wp-content
            DocumentRoot /usr/share/wordpress
            ServerAdmin [email protected]

            <Directory /usr/share/wordpress>
                Options FollowSymLinks
                AllowOverride Limit Options FileInfo
                DirectoryIndex index.php
                Order allow,deny
                Allow from all
            </Directory>

            <Directory /var/lib/wordpress/wp-content>
                Options FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Thanks in advance,
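    One place worth checking, as a hedged suggestion: Debian's packaged WordPress reads per-site settings from /etc/wordpress/config-<hostname>.php, and plugins can fall back to building URLs from the filesystem path when the content URL isn't defined there. Something along these lines (hostname and URL are hypothetical):

        <?php
        // /etc/wordpress/config-example.org.php
        define('WP_CONTENT_DIR', '/var/lib/wordpress/wp-content');
        define('WP_CONTENT_URL', 'http://example.org/wp-content');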

  • Ensuring a repeatable directory ordering in linux

    - by Paul Biggar
    I run a hosted continuous integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem is that a customer's tests will sometimes fail because of the directory ordering of their code checked out on the VM.

    Let me go into more detail. On OS X, the HFS+ file system ensures that directories are always traversed in the same order. Programmers who use OS X assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux filesystems do not offer ordering guarantees when traversing directories. As an example, consider two files, a.rb and b.rb. a.rb defines MyObject, and b.rb uses MyObject. If a.rb is loaded first, everything will work. If b.rb is loaded first, it will try to access the undefined constant MyObject, and fail. Worse still, it doesn't always just fail: because directory ordering on Linux is not deterministic, the order will differ between machines. Sometimes the tests pass and sometimes they fail, which is the worst possible result.

    So my question is: is there a way to make filesystem ordering repeatable? Some flag to ext4, perhaps, that says it will always traverse directories in some order? Or maybe a different filesystem that has this guarantee?
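    For what it's worth, readdir() order is an implementation detail of the filesystem, and the durable fix is sorting in the application (e.g. Dir.glob("*.rb").sort in the Ruby case above). The difference is easy to demonstrate, as a sketch:

        ls -U    # -U disables sorting: this raw order is what readdir() hands programs
        ls       # ls sorts its output itself, which is why the problem hides in casual use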

  • run as dialog always pops up

    - by user12006
    I recently got some malware on a machine that I don't use for much (partly intentional). I've cleaned it, but now every time I open any .exe the 'Run As' dialog pops up, asking me which user I want to use to run the program. What causes this, and what's the fix for it?

    Edit: my process to remove the malware was as such:

    1. Disconnected from the network.
    2. Deleted the DisableTaskMgr reg key.
    3. Inspected with Process Explorer and Task Manager and noticed that all applications were being run within another executable located in Documents and Settings...\Temp\Some.exe.
    4. The system tray application was also in Documents and Settings...\Temp\SomeOther.exe.
    5. I suspected that a service was in place, as the system tray application would restart if it was killed, but couldn't find any service that I didn't recognize.
    6. Removed permissions from Some.exe and SomeOther.exe (on those files only).
    7. Restarted and deleted Some.exe and SomeOther.exe.
    8. Deleted the startup entries that were created.
    9. Ran AVG Free and Windows Defender to remove anything else (they would be killed immediately before the two .exe's were removed).
    10. Cleaned the registry via CCleaner.

    Note that system restores would finish saying something to the effect of 'couldn't restore system: there were no changes made'. I attempted to restore to a week ago, and I only got the malware yesterday.
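    A common cause of exactly this symptom is malware changing the default verb for .exe files from "open" to "runas". A hedged sketch of restoring the XP defaults (export/back up the keys before merging):

        Windows Registry Editor Version 5.00

        ; make "open" the default verb again (an empty default falls back to open)
        [HKEY_CLASSES_ROOT\exefile\shell]
        @=""

        ; the stock XP launch command for executables
        [HKEY_CLASSES_ROOT\exefile\shell\open\command]
        @="\"%1\" %*"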

  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, I've had several interactions in the past couple of months with people who still open Internet Explorer and try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve a custom error page in this instance that instructs them how to download and use an FTP client. I tried setting up a sub-domain in cPanel, hoping I could simply drop in an .htaccess file with the error page, but I got this error:

        ftp.mydomain.com domainadmin-domainexistsglobal

    I also tried creating a custom error page in PHP which reads the site URL and serves the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow to help. Thanks!
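    If ftp.mydomain.com resolves to the same web root as the main site (which the HostGator-served error page suggests), mod_rewrite in the main .htaccess can branch on the Host header. A sketch, with ftp-help.html as a hypothetical instructions page:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^ftp\. [NC]
        RewriteRule !^ftp-help\.html$ /ftp-help.html [L]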

  • Black screen appears when booting new install of Ubuntu 11.10 on my desktop, cannot access Grub menu to fix

    - by izn
    I installed 11.10 on my desktop PC, but I get a black screen after the BIOS screen when I try to boot it. I was able to run 10.04.4 on my hard drive before installing 11.10, and I am also able to use 11.10 on my USB pendrive and CD-ROM. I've tried unplugging all USB devices before booting, and also upgrading from 11.10 to 11.10. Holding the Shift key from the BIOS screen doesn't allow me to access the GRUB menu to try:

    1. Highlight the first entry, press "e" to edit it.
    2. Navigate to the words "quiet splash", delete them and type "nomodeset" in their place (without quotes).
    3. Press Ctrl+X to continue booting.
    4. Once on the desktop, go to System > Administration > Additional Drivers and activate the recommended drivers.

    So, running 11.10 from my pendrive, I tried editing /etc/default/grub, commenting out the GRUB_HIDDEN_TIMEOUT setting by putting a '#' in front of it to display the GRUB menu, and setting GRUB_TIMEOUT to a value greater than or equal to 1, e.g. GRUB_TIMEOUT=10. However, when I run sudo update-grub, I get:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    I get the same error with update-grub after:

        sudo mount /dev/sda1 /mnt

    and after:

        sudo grub-install --root-directory=/mnt /dev/sda
        reboot
        sudo update-grub

    Other suggestions to fix the update-grub problem: open Synaptic, purge all the related GRUB packages, reinstall grub-pc, and finally run sudo update-grub. Or use Grub Customizer (http://ubuntuforums.org/showthread.php?t=1195275).

    What would be the best way to approach this? I'm concerned about purging "all the related grub installed packages", but if it's true that some files are corrupted, this would seem necessary. Also, was I executing the correct commands, i.e. with mount and grub-install, before running update-grub?
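    For the "cannot find a device for /" error specifically, the usual cause is running update-grub from the live session against a bare mount, without the virtual filesystems the GRUB tools need. A sketch of the chroot-based repair, assuming the installed root is /dev/sda1:

        sudo mount /dev/sda1 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt update-grub
        sudo chroot /mnt grub-install /dev/sda
        for d in /sys /proc /dev/pts /dev; do sudo umount /mnt$d; done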
