Search Results

Search found 19966 results on 799 pages for 'wild thing'.


  • Software to automate website screenshot capture

    - by Leniel Macaferi
    Do you know of any software that can automate the process of taking screenshots of every page of a website? It would act like a spider/crawler/robot, you name it. For example: I developed a website and now I'd like to get a screenshot of every page of the site. I could of course do it manually, but that is a lot of work. For each module of the site (Student, Payment, etc.) I have different form pages (Create, Edit, Details, Delete, etc.). What I'm looking for is software that can visit every link of the site and then capture the screen, automating the whole process. It would also be good if the software let the user pass in a list of URLs to capture, allowing even more fine-grained configuration. EDIT: I tried Selenium, mentioned by Aaron in his answer, but I managed to find an app that does exactly what I needed. It's called Paparazzi!. I wrote a blog post to showcase my attempt at Selenium and my findings regarding Paparazzi!'s batch capture functionality: Software to automate website screenshot capture
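    For anyone curious what the Selenium attempt looked like, a minimal sketch in Python (assuming the selenium package and a matching browser driver; the URL list and file names here are placeholders, not the real site):

        # capture.py - visit each URL in a list and save a screenshot
        from selenium import webdriver

        urls = [
            "http://localhost/Student/Create",   # hypothetical module pages
            "http://localhost/Student/Edit",
            "http://localhost/Payment/Details",
        ]

        driver = webdriver.Firefox()             # any WebDriver-backed browser works
        for i, url in enumerate(urls):
            driver.get(url)                      # blocks until the page load finishes
            driver.save_screenshot("page_%03d.png" % i)
        driver.quit()

    Crawling the site to build the URL list automatically would be the next step; the sketch assumes the list is supplied by hand.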


  • Laptop hangs on POST and does not finish except on rare occasions

    - by user1049697
    I have an old Toshiba Satellite A100 laptop that hangs on POST when I try to start it. On rare occasions it does finish the POST and boots Windows successfully, but most of the time it only gets partway through and then hangs. I can enter the BIOS when it has frozen, though I have to open the DVD drive first for some reason. The keyboard is not quite right either: I can't navigate the BIOS properly because the arrow keys don't work. I tried an external keyboard, but the problem persisted. I have tried removing the memory, hard drive, and battery to see if any of these were the problem, but that did not solve it. The one logical thing left to do would be to remove the CMOS battery, but the "brilliant" engineers at Toshiba have placed it such that a complete disassembly of the machine is necessary. What this all boils down to is basically the question of whether I can "save" this machine and get it to boot properly, or whether I should just send it off for recycling. I suspect it might need costly repairs, but I can't bring myself to throw it away before I have made sure it's completely dead.


  • KVM and JBoss Java Application Server

    - by Jason
    We have a large Xen deployment running on both RHEL and CentOS and have recently started looking at KVM, since that is where the future of VMs on Linux appears to be. We can load the server and get everything running without an issue. However, when we load up a new one with JBoss (4.2 Community edition, Sun JDK 6) and deploy a large EAR, the server goes a little crazy. The %sy will jump to 80-99% and just hang there for long periods of time, and we see a similar jump in %us on the host machine. We thought at first this might be I/O, as it seems to happen at the start of JBoss but then "cool down" after everything gets loaded. We did some tests by extracting some large tar.gz files and using jar -xvf on the EAR, but could not re-create it. Then we started thinking this might be some kind of memory-access issue. We loaded a C program that generates a lot of memory activity and, sure enough, we saw the spikes again. Not as high, mind you, but we did see it jump. We then wrote a small Java program to do the same thing and, sure enough, we saw it jump again. Any thoughts on what might be causing this? Is this just the way KVM works? As a side note, we do NOT see this behavior on any other setup: Xen, VMware, and/or native iron. The system does seem a bit slower than our Xen / VMware ones.
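    For reference, the memory-activity test was along these lines (a sketch; the originals were a C program and a small Java program, but the idea carries over to any language):

        # churn.py - allocate and touch memory repeatedly to generate guest paging activity
        import time

        def churn(mb=512, rounds=20):
            for _ in range(rounds):
                block = bytearray(mb * 1024 * 1024)   # one large allocation
                for i in range(0, len(block), 4096):  # touch every page so it is really mapped
                    block[i] = 1
                del block                             # free it and start over
                time.sleep(0.1)

        if __name__ == "__main__":
            churn()

    The point is just to watch %sy in top on the host while something like this runs inside the guest.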


  • Write Fedora.iso to USB and boot it from a MacBook

    - by MTilsted
    I have an .iso image of the full Fedora 16 install (downloaded from http://fedoraproject.org/en/get-fedora-options#formats as "Fedora 16 DVD") and the question now is: how do I write it to a USB stick so I can install it on my MacBook? I tried using dd as the install guide said, and that gave me a USB stick which can boot my PC, but it can't boot the Mac (the Mac's startup menu doesn't show it as a boot option). EDIT: I downloaded a live install image and did this (sdd is my 4GB USB stick):

        /sbin/mkdosfs -F 32 -n usbdisk /dev/sdd1
        sudo livecd-iso-to-disk --format --reset-mbr --efi /tmp/download/Fedora-16-i686-Live-KDE.iso /dev/sdd1

    This produced an image which can boot on my PC but not on my Mac, which seems to indicate that --efi is not working; if it really were EFI, it would not boot on a normal PC, would it? I then tried the same thing but writing the image directly to /dev/sdd instead of /dev/sdd1, and this still will not boot on the Mac (it never shows up on the Mac's startup screen):

        sudo livecd-iso-to-disk --format --reset-mbr --efi /tmp/download/Fedora-

    PS: My host Linux is Fedora 13.


  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There is not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak. The first question is: what's the equivalent of RFC1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC4193), which states that ULAs should be assigned the fc00 prefix, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small LAN at home, numbered 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that any address in fc00::/7 is meant to be globally routable. Does this mean I'll need extra protections so my router does not automatically start advertising these private IPv6 addresses to the world? Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range, with the last 64 bits of the address derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks. Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, with almost no explanation at the smaller home-network level. Can I see a scope ID and CIDR notation used together, i.e., fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
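    For what it's worth, a sketch of generating an RFC4193-style prefix with Python's ipaddress module; note that locally assigned ULAs actually come out of fd00::/8 (fc00::/7 with the L bit set), which is why the sketch starts with fd:

        # ula.py - derive a random unique local /48 and pick a /64 analogous to 192.168.4.0/24
        import ipaddress, os

        global_id = os.urandom(5).hex()      # the 40-bit random Global ID from RFC 4193
        prefix = "fd" + global_id            # fd00::/8 = locally assigned ULA space
        ula = ipaddress.IPv6Network("%s:%s:%s::/48"
                                    % (prefix[0:4], prefix[4:8], prefix[8:12]))
        subnet4 = list(ula.subnets(new_prefix=64))[4]   # "subnet 4", like 192.168.4.0/24
        print("ULA /48: ", ula)
        print("subnet 4:", subnet4)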


  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:

        ssh -l targetuser -i path/to/key targethost

    But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with:

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
        Could not create directory '/var/www/.ssh'.
        Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
        Permission denied (publickey).

    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid requiring manual server configuration, since this code will be deployed in a number of different environments. Any ideas? Here are a few examples of what I've tried that don't work:

        $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost

    Eventually, I'll be using this solution to run rsync as the apache user.
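    One avenue that sidesteps the home-directory requirement entirely is an SSH library rather than the ssh binary; a sketch with Paramiko (Python; the host name and key path are placeholders), though note that rsync itself would still need the regular client:

        # apache_ssh.py - connect with a key and no ~/.ssh at all
        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # no known_hosts file needed
        client.connect("targethost", username="deploy",
                       key_filename="path/to/key")
        stdin, stdout, stderr = client.exec_command("uptime")
        print(stdout.read().decode())
        client.close()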


  • Allow incoming connections on Windows Server 2008 R2

    - by Richard-MX
    Good day, people. First, I'm new to Windows Server. I've always used the Linux/Apache combo, but my client has an AWS EC2 Windows Server 2008 R2 instance and he wants everything on there. I'm working with IIS with PHP enabled as FastCGI, and everything is working, but I can't see the websites stored on it from the internet. The public DNS that AWS gave us for that instance is: http://ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com/ But if I copy-paste that address, I get nothing, no IIS logo or anything like that. My common sense tells me that the firewall could be blocking access. Can anyone tell me where to enable some rules to get this thing working? I don't want to start enabling rules at random and make the system insecure. If you need any additional info, ask and I will provide it. Thanks in advance. UPDATE: Amazon EC2 displays this:

        Public DNS: ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com
        Private DNS: ip-XX-XXX-XX-252.us-west-2.compute.internal
        Private IPs: XX.XXX.XX.25

    In my test micro instance, I just use the Public DNS address (the one that starts with "ec2") and it works like a charm (of course, the micro instance has its own Public DNS; I'm not assuming the same address for both instances). For the large instance, I tried to do the same and set everything up as in the micro instance, but if I use the Public DNS, it doesn't load anything. I'm suspicious of the Windows Firewall, but the HTTP-related stuff is enabled. What should I do to get access to the large instance? I don't want to set up the domain yet; I want access from an Amazon URL. 2ND EDIT: all fixed. Charles pointed out that the Security Groups might not be properly set up for the instance. He was right. Just added the HTTP service to the rules and all works well.
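    For reference, the fix amounts to opening port 80 in the instance's Security Group. The AWS console's Security Groups page does this with a click; from code it would look something like this boto3 sketch (the group name is hypothetical):

        # open_http.py - allow inbound HTTP on an EC2 security group
        import boto3

        ec2 = boto3.client("ec2", region_name="us-west-2")
        ec2.authorize_security_group_ingress(
            GroupName="my-web-servers",   # hypothetical group name
            IpProtocol="tcp",
            FromPort=80,
            ToPort=80,
            CidrIp="0.0.0.0/0")           # HTTP open to the world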


  • Magnetic Stripe Reader over Terminal Server has random Uppercase/Lowercase nonsense

    - by Peter Turner
    The magnetic stripe reader that I'm using and testing is just supposed to be sending keystrokes. Unfortunately, it seems to randomly send uppercase and lowercase keystrokes, sometimes substituting % for 5 and ^ for 6, and vice versa. (If you've ever programmed for a magnetic stripe reader, you know that's not a good thing.) Is there something in the RDP protocol that causes this? I've got kind of a convoluted system: XP running inside VirtualBox on Fedora 11, RDP'ed into a Win2k3 server. It works on the XP VM, and it doesn't work over RDP. What's weirder is that I'm not even emulating the USB drivers for my mag card reader: Linux is sending keystrokes straight into Windows, and MSTSC on Windows XP is sending crap to the Win2k3 server. I'm 99% certain this isn't a problem with the card reader, and it has nothing to do with my programming either (I get the same junk coming into Notepad that I get coming into our software; that's why I didn't ask on SO). And it works with RDP programs other than MSTSC.exe! Needless to say, I'm in need of some RDP guruship.


  • Windows Media Center showing Jerky Video on PC

    - by Kris Erickson
    I had to repave my Windows 7 x64 box last week due to a hard drive crash, and for a while everything was running perfectly, but now all videos in Windows Media Center are jerky (the sound is fine; they just seem to skip a ton of frames all the time). This happens on the local machine, and the same thing happens when I try to stream to my Xbox. The videos all show fine in VLC and Windows Media Player. I guess I must have installed something recently (in the process of getting back all the apps I usually have running on my PC) that caused this, but for the life of me I can't figure it out. I have updated to the latest video driver (and then rolled back to the standard Windows 7 driver), and I have rolled back all the other drivers that I have installed (I believe). I have uninstalled all the codec packs (I also run TVersity, so I had the TVersity codec pack installed), and I uninstalled TVersity. Nothing seems to help. I have uninstalled Windows Media Center and reinstalled it from Programs and Features. I have basically run out of things to try, and am almost thinking about reinstalling Windows again. Any suggestions?


  • Uninstall php5 installed from source

    - by diegomichel
    I tried to install php5 from source, and it worked. Then for some reason I needed to install the official packages, so I tried a make uninstall, and to my surprise there is no such target. So I tried deleting all the installed files by hand, then installed the official Debian packages, and it worked fine... until I needed to install the SQLite module, which gives me the following error:

        php --version
        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/pdo_sqlite.so' - /usr/lib/php5/20090626/pdo_sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/sqlite.so' - /usr/lib/php5/20090626/sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP 5.3.1-5 with Suhosin-Patch (cli) (built: Feb 22 2010 22:46:05)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies

    So I remembered that manual install I did, and I think some old library left over from it is causing the problem. The bad thing is that there really is no make uninstall in the php5 source code:

        php-5.2.13 > make uninstall
        make: *** No rule to make target `uninstall'.  Stop.

    I have tried reinstalling and purging all PHP-related packages via aptitude, without success. OS: Debian Squeeze.

        uname -a
        Linux desktop 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC 2010 x86_64 GNU/Linux

    Any idea how to fix this?


  • RAID administration in Debian Lenny

    - by Siim K
    I've got an old box that I don't want to scrap yet, because it's got a nice working 5-disk RAID assembly. I want to create two arrays: RAID 1 with two disks and RAID 5 with the other three disks. The RAID card is an Intel SRCU31L. I can create the RAID 1 volume in the console that you access with Ctrl+C at startup, but it only allows the creation of one volume, so I can't do anything with the three remaining disks. I installed Debian Lenny on the RAID 1 volume and it worked out nicely. What utilities could I now use to create/manage the RAID volumes from within Debian Linux? I installed the raidutils package but get an error when trying to fetch a list with either raidutil -L controller or raidutil -L physical:

        # raidutil -L controller
        osdOpenEngine : 11/08/110-18:16:08
        Fatal error, no active controller device files found.
        Engine connect failed: Open

    What could I try to get this thing working? Can you suggest any other tools? The command lspci -vv gives me this about the controller:

        00:06.1 I2O: Intel Corporation Integrated RAID (rev 02) (prog-if 01)
                Subsystem: Intel Corporation Device 0001
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 64, Cache Line Size: 32 bytes
                Interrupt: pin A routed to IRQ 26
                Region 0: Memory at f9800000 (32-bit, prefetchable) [size=8M]
                [virtual] Expansion ROM at 30020000 [disabled] [size=64K]
                Capabilities: <access denied>
                Kernel driver in use: PCI_I2O
                Kernel modules: i2o_core


  • Permission denied problem in FreeNAS + Transmission

    - by Torbjörn Karlsson
    Running FreeNAS 0.7.2 (5543) and Transmission 2.11. The problem is that I cannot save a torrent wherever I want. For example, I can save in /mnt/1-500gb/Tv/dexter, but I cannot save in /mnt/4-1000gb/tv/Lost. When I try to save in the Lost folder, I get a permission-denied error in the web interface, but when I try to save the same torrent file in the dexter folder, everything works fine. This is probably an easy thing to fix, but I'm new to FreeNAS. The user name for Transmission is TorrentUser, if that helps. I have now found that I cannot browse some of the disks in QuiXplorer either: I can browse /mnt/4-1000gb/ but not /mnt/1-500gb, where I get "Unable to read directory".

        $ mount
        /dev/md0 on / (ufs, local)
        devfs on /dev (devfs, local)
        procfs on /proc (procfs, local)
        /dev/fuse1 on /mnt/5 - 500gb (fusefs, local, synchronous)
        /dev/fuse2 on /mnt/2 - 1000gb (fusefs, local, synchronous)
        /dev/fuse3 on /mnt/3 - 1000gb (fusefs, local, synchronous)
        /dev/fuse4 on /mnt/4 - 1000gb (fusefs, local, synchronous)
        /dev/fuse5 on /mnt/320GB - USB (fusefs, local, synchronous)
        /dev/md1 on /var (ufs, local)
        /dev/da0a on /cf (ufs, local, read-only)
        /dev/fuse0 on /mnt/1 - 500gb (fusefs, local, synchronous)

    Don't work: 1 - 500gb, 2 - 1000gb, 3 - 1000gb. Work: 320GB - USB, 4 - 1000gb, 5 - 500gb. Those three disks are the same ones I can save my torrents to. PS: Every disk works perfectly when I use FTP.


  • Hudson deploy specific git revision

    - by brad
    I'm using Hudson to auto-deploy my Rails app to Heroku. In my main build job I pull from a Git repo (hosted using gitosis on the same machine), master branch, with the following settings:

        URL of repository: /home/git/repositories/my_app.git
        Name of repository: origin
        Refspec: +refs/heads/master:refs/remotes/origin/master
        Branches to build: master

    Then, assuming all tests pass, I want to kick off a new build that is the deploy to Heroku. I can't, however, figure out how to get that deploy build to check out the particular revision that this build used. I understand there's a parameterized trigger plugin that would allow me to pass this revision number, but I don't know how I can tell Hudson to check out that particular revision in the deploy build. I'm pretty sure this just comes down to my limited knowledge of Git, but where in Hudson's Git configuration is there an option to check out a particular revision? Otherwise, many commits could happen while a build is running, and when it kicks off a deploy build, that deploy build would just check out the HEAD of the branch, which may not be the same code that triggered this build. I don't fully understand why I have a refspec in Hudson and then also specify a branch to build; I thought these were the same thing. Can the refspec somehow specify the revision number? How would this be referenced if it was passed through with the parameterized trigger plugin? (I've never used that plugin, but someone else recommended it as a way to pass vars to a new build; if there's another way, I'm all ears.)
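    The deploy job ultimately just needs to check out the exact SHA the build job tested. As a sketch of the idea (the parameter name GIT_COMMIT_TO_DEPLOY is hypothetical, something the parameterized trigger plugin would pass in as an environment variable):

        # deploy_checkout.py - pin the working copy to the revision that was built
        import os, subprocess

        sha = os.environ["GIT_COMMIT_TO_DEPLOY"]    # hypothetical parameter from the build job
        subprocess.run(["git", "fetch", "origin"], check=True)
        subprocess.run(["git", "checkout", sha], check=True)   # detached HEAD at the tested commit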


  • Why does Windows make random "device connect" and "device disconnect" sounds?

    - by Steve Elmer
    Hello, I've been noticing this since Windows Vista, and I see it on Windows 7 now as well. Throughout the day my computer makes apparently random device-connect and/or device-disconnect ("boink") sounds; I suppose it is the same sound you hear when connecting or disconnecting a USB device such as a thumb drive. I've noticed that this happens on each of the three computers I use: mine at home, my wife's computer, and my machine at work. It happens without any user action at all; i.e., I'll be just sitting there (hands off my mouse and keyboard) and the computer will make the sound. There is no visual cue or anything, just the sound. I have sometimes gone in pursuit of the sound (running virus scans, examining event logs, and observing Task Manager) but have never had any luck tracking it down. Surely someone else out there must be experiencing this too. Any ideas? Thanks, Steve


  • Using Varnish (only) for DDoS mitigation

    - by Martin Kanters
    My VPS is suffering from a (D)DoS: a SYN flood with spoofed IPs. I'm currently searching for ways to defend against it, at least a bit. It's running a DirectAdmin Apache2 web server, mainly used for serving PHP and MySQL. We are using CloudFlare, which says it can mitigate (D)DoS to some degree, but the attacker now knows our real IP address, so CloudFlare isn't helping a bit. I've done some searching on the net and found out about enabling SYN cookies to defend against this; I checked my settings, and it seems they were enabled all along. I've also read that Varnish is able to defend against SYN flooding and Slowloris attacks, and now I'm quite interested in using it. The thing is that CloudFlare is already caching a lot for us, and I don't wish to spend too many resources on Varnish. Is it possible and smart to set up Varnish only for the better handling of requests? Are there perhaps better ways that I've missed? Thanks in advance, Martin


  • Turn off monitor (energy saving) while in text console mode (in Linux)

    - by Denilson Sá
    How do I configure the Linux text console to automatically turn off the monitor after some time? By "text console" I mean the thing you get on Ctrl+Alt+F[1-6], which is what you get whenever X11 is not running. And no, I'm not using any framebuffer console (it's a plain, good old 80x25 text mode). Many years ago I was using Slackware Linux, and it used to boot up in text mode; you would manually run startx after logging in. The main login "screen" was the plain text-mode console, and I remember that the monitor used to turn off (energy-saving mode, indicated by a blinking LED) after some time. Now I'm using Gentoo with a similar setup: the machine boots up in text mode, and I only rarely need to run startx. I say this because this is mostly my personal Linux server, and there is no need to keep X11 running all the time (which means I don't want to use GDM/KDM or any other graphical login screen). But now, in this Gentoo text-mode console, the screen goes black after a while, but the monitor does not enter any energy-saving mode (the LED is always lit). Yes, I've waited long enough to verify this. Thus my question: how can I configure my current system to behave like the old one? In other words, how do I make the text console trigger the monitor's energy-saving mode? (Maybe I should (cross-)post this question to http://unix.stackexchange.com/.)
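    One knob worth trying is setterm from util-linux, run on the console itself (not under X). A sketch driving it from Python, with the timeouts picked arbitrarily:

        # console_blank.py - ask the console driver to blank and then power down the monitor
        import subprocess

        subprocess.run(["setterm", "-blank", "10"], check=True)             # blank after 10 min
        subprocess.run(["setterm", "-powersave", "powerdown"], check=True)  # use DPMS powerdown
        subprocess.run(["setterm", "-powerdown", "15"], check=True)         # power off after 15 min

    setterm works by writing escape sequences to the terminal, so this only has an effect when run on the console in question.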


  • Scheduling Automatic Backups for Virtual Private Web Server running CentOS 6.3 and WHM

    - by Oliver Farrell
    I'm pretty new to administering my own VPS, but thus far am finding it quite a compelling experience; there's something quite refreshing about having complete control over everything it does. One thing that I would like to look at is a suitable backup solution (running a few times a day). My current setup is as follows: I'm running a CentOS 6.3 VPS with a single 25GB hard drive solely for the purpose of hosting websites, and I'm using WHM & cPanel for administering them. I now plan on adding an additional hard disk and hooking it up to my VPS. What I'm not sure about is how I get the two disks talking and get the backup process going. I'm not a seasoned SSH-er, so I don't really know where to start. I'm hosting with Serverlove (one of the best hosting providers I've used) and am provided with a number of unique identifiers for each hard disk, so I imagine these may play a part in linking them together. I appreciate that this is a little vague (I'm clutching at straws), but any assistance is very much appreciated.
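    Once the second disk is attached and mounted, the simplest scheme is a periodic rsync from the first disk to the second; a minimal sketch (Python driving rsync; the mount point and source path are hypothetical):

        # backup.py - mirror the site data onto the second disk; run from cron a few times a day
        import subprocess, datetime

        SRC = "/home/"                   # where cPanel keeps account data
        DEST = "/mnt/backupdisk/home/"   # the new disk, assumed mounted here

        result = subprocess.run(["rsync", "-a", "--delete", SRC, DEST],
                                capture_output=True, text=True)
        if result.returncode != 0:
            print(datetime.datetime.now(), result.stderr)

    A crontab entry such as 0 */6 * * * python3 /root/backup.py would then run it four times a day.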


  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend forwards to apache2 as the default backend (or nginx; it's apache in this local test), and to the node.js app if the URL contains socket.io in the request AND a particular domain name. I have something like:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend all 0.0.0.0:80
            timeout client 5000
            default_backend www_backend
            acl is_soio url_dom(host) -i socket.io   # if the request contains socket.io
            acl is_chat hdr_dom(host) -i chaturl     # if the request comes from chaturl.com
            use_backend chat_backend if is_chat is_soio

        backend www_backend
            balance roundrobin
            option forwardfor                        # This sets X-Forwarded-For
            timeout server 5000
            timeout connect 4000
            server server1 localhost:6060 weight 1 maxconn 1024 check   # forwards to apache2

        backend chat_backend
            balance roundrobin
            option forwardfor                        # This sets X-Forwarded-For
            timeout queue 50000
            timeout server 50000
            timeout connect 50000
            server server1 localhost:5558 weight 1 maxconn 1024 check   # forwards to the node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly, but it fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js). Why does it go to apache when it should go to the node.js app that serves those files? The weird thing is that if I access the socket.io file directly, it loads after refreshing a few times, so I suppose HAProxy is "caching" the forwarding decision for the client once its first request reaches the apache server. Any suggestions on how this can be solved, or on what I can try or look into?


  • Damaged partition after disk image

    - by Charles Gargent
    I am trying to clone/back up a disk with Windows 7 Pro 64-bit on it. First I tried EaseUS Todo Backup and used the disk clone option without sector-by-sector copy. I then plugged in the new drive and got the following error: "Invalid or damaged Bootable partition". I then plugged the old drive back in and was greeted with the same error. My next step was to try a sector-by-sector disk clone, but I still get the same error. I have tried fixing the MBR with the Windows disc, but that makes no difference. I have tried some other free tools and I get the same error. I have tried this on a different machine running Windows 7 Enterprise 32-bit without this problem. I have done some searching, and the only thing I can come up with is this post from the Acronis forums, http://forum.acronis.com/forum/8254, suggesting that the BIOS is reading my disk geometry incorrectly. Can anyone shed any light on this? Is there a way I can fix this, either in the BIOS or by repairing the MBR every time I reimage it?


  • django : nginx : jquery css not being served

    - by PlanetUnknown
    I'm using apache+mod_wsgi for Django, and all CSS/JS/images are served through nginx. For some odd reason, when others (friends/colleagues) try accessing the site, the jQuery/CSS does not get loaded for them, so the page looks jumbled up. My HTML files use code like this:

        <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
        <script type="text/javascript" src="http://1x.x.x.x:8000/js/custom.js"></script>

    My nginx configuration in sites-available is like this:

        server {
            listen 8000;
            server_name localhost;

            access_log /var/log/nginx/aa8000.access.log;
            error_log /var/log/nginx/aa8000.error.log;

            location / {
                index index.html index.htm;
            }

            location /static/ {
                autoindex on;
                root /opt/aa/webroot/;
            }
        }

    There is a directory /opt/aa/webroot/static/ which has the corresponding css & js directories. The odd thing is that the pages show fine when I access them: I have cleared my cache, etc., but the pages load fine for me from various browsers. Also, I don't see any 404s or other errors in the nginx log files; actually, the nginx logs are not getting refreshed at all. I restarted the nginx server as root; is that incorrect? There is a user www-data defined in the nginx configuration file. Any pointers would be great.


  • How can I use the `eject` command on a computer I have SSH'd into?

    - by will
    If I run eject on my machine, it works exactly as expected; however, if I ssh into the machine next to me and do the same thing, it does not work.

    My computer:

        eject: using default device `cdrom'
        eject: device name is `cdrom'
        eject: expanded name is `/dev/cdrom'
        eject: `/dev/cdrom' is a link to `/dev/sr0'
        eject: `/dev/sr0' is not mounted
        eject: `/dev/sr0' is not a mount point
        eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
        eject: `/dev/sr0' is not a multipartition device
        eject: trying to eject `/dev/sr0' using CD-ROM eject command
        eject: CD-ROM eject command succeeded

    Other computer:

        eject: using default device `cdrom'
        eject: device name is `cdrom'
        eject: expanded name is `/dev/cdrom'
        eject: `/dev/cdrom' is a link to `/dev/sr0'
        eject: `/dev/sr0' is not mounted
        eject: `/dev/sr0' is not a mount point
        eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
        eject: `/dev/sr0' is not a multipartition device
        eject: unable to open `/dev/sr0'

    If I look in the /dev/ directory, I find cdrom, which is a symlink to sr0, as mentioned by the verbose output of eject -v above. On my machine, if I try to look at it, it will first close the drive if it is open, and then give this:

        $ less sr0
        sr0 is not a regular file (use -f to see it)
        $ less -f sr0
        sr0: No medium found

    But if I do it on the other computer:

        $ less -f sr0
        sr0: Permission denied

    So I look at the file more closely, and get this on both machines:

        $ ls -la sr0
        brw-rw----+ 1 root cdrom 11, 0 Nov 12 10:13 sr0

    Does anyone know a way around this? I do not have root access.


  • Testing for disk write

    - by Montecristo
    I'm writing an application that stores lots of images (each <5MB) on an ext3 filesystem; this is what I have so far. After some searching here on Server Fault I have decided on a directory structure like this:

        000/000/000000001.jpg
        ...
        236/519/236519107.jpg

    This structure will allow me to save up to 1'000'000'000 images, as I'll store a maximum of 1'000 images in each leaf directory. I've created it, and from a theoretical point of view it seems fine to me (though I have no experience with this), but I want to find out what will happen when the directories start filling up with files. A question about creating this structure: is it better to create it all in one go (which takes approx 50 minutes on my PC), or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is it ok? I thought I could test as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following: how long does it take for an image to be saved when there is little or no space used? How does this change as the space is used up? How long does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files? Does launching the following command make any sense at all, and is it the only thing I have to do to have a clean start if I want to run my tests over again?

        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

    Do you have any suggestions or corrections?
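    For what it's worth, a minimal sketch of the id-to-path mapping described above (Python; the root directory is a placeholder, and this is the create-on-demand variant):

        # shard.py - map a numeric image id to its 000/000/000000001.jpg-style path
        import os

        def image_path(root, image_id):
            name = "%09d" % image_id               # e.g. 236519107
            d1, d2 = name[0:3], name[3:6]          # two 3-digit directory levels
            directory = os.path.join(root, d1, d2)
            os.makedirs(directory, exist_ok=True)  # create the leaf only when first needed
            return os.path.join(directory, name + ".jpg")

        print(image_path("/srv/images", 236519107))   # -> /srv/images/236/519/236519107.jpg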


  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL Server for a while and have noticed that I/O hits 100% every so often, per Task Manager and Perfmon. I have normally been able to correlate this spike with SUSPENDED processes in SQL Server Management Studio when I execute exec sp_who2. The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

        System drive (Windows) on RAID 1, two 280GB drives
        SQL on RAID 10, two mirrored pairs of 280GB drives in two spans

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying with:

        1. Checking indexes and reindexing some tables.
        2. Adding an additional RAID 1 (with two new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID.

    For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction log before writing to the database. But on the flip side, we'd be reducing the amount of data written to the RAID 10, which is where all of the "meat" is, thereby increasing that RAID's performance for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this thing wrong. Finally, are we just wasting our time? Should we instead be focusing our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?


  • Disappearing Arial on 2 Macs

    - by drewk
    I noticed that Safari started rendering common web pages in a funny manner on two different Macs that I have; one is a MacBook Pro and the other a Mac Pro desktop. Yahoo and Google would appear all excessively bold, or all italic, and not look right or acceptable at all. Both computers are running OS X 10.6.3 "Snow Leopard". It turns out that Arial.TTF and Arial Bold.ttf had somehow been deleted on both computers. I restored Arial through Font Book and got my web mojo back. So, questions:

    1) Has anyone seen Arial strangely, randomly disappear? The only thing in common is that these are the only two computers out of eight on site that recently got Adobe CS 5 installed. Has anyone had CS 5 delete Arial?

    2) When I restored Arial with Font Book, it went into User fonts rather than All fonts. Can I use Font Book to restore a font into /System/Library/Fonts, or do I need to do that manually?

    3) I located THIS article on the web regarding OS X fonts. Essentially, Snow Leopard did away with the older .dfont format and replaced it with open TrueType. There is a minimum font list, but Arial is not on it; Arial is installed by MS Office.

    4) Why are websites affected by Arial being missing anyway? If I look at the HTML source for Yahoo, for example, "arial" is specified by name only in an ad; Yahoo itself does not specify a font name. In my Safari preferences, I have Times and Courier specified as the default fonts, which is the default for Safari when installed. How does a missing Arial screw things up anyway?

    Thanks in advance.

