Search Results


  • Web filtering (Proxy or DNS) with option for users to ignore the block

    - by Jon Rhoades
    We are struggling with our users visiting infected or "attack" sites, and with phishing in general. Most of our machines are protected by an enterprise antivirus and monitoring solution (McAfee ePO), and we try to get people to use Firefox... but no AV is perfect, we have to endure personal machines as well (albeit on their own 'Plague' VLANs), and we would like to do something about phishing, as our users seem intent on disclosing their passwords to the world...

    To complicate matters, we don't want to implement a hard block, for many, many reasons. Instead we would like to implement something akin to Firefox's "Reported Scam/Phish/Attack Site" page, with its "Get me out of here" and, crucially, "Let me in anyway" options - giving users the choice to still infect themselves if they feel like it (or to view a site that was incorrectly blacklisted). The reason we can't just use Firefox is that we have a core enterprise app only certified on IE 6 & 7 - thank you, Oracle.

    Is it possible to implement this type of advisory filtering, either using a proxy (in our case Squid) or DNS?

    http://serverfault.com/questions/15801/what-free-options-are-available-for-web-content-filtering
    http://serverfault.com/questions/47520/open-source-filtering-of-https-traffic

    were a good start, but they don't address the advisory aspect of the filtering.
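
    For the Squid side, advisory blocking is usually built from deny_info pointing at a warning page that carries the blocked URL, plus some mechanism that lets a warned client through. A minimal sketch, assuming Squid 3.x - warn.cgi and the file-based bypass ACL are placeholders you would have to build yourself:

        # domains to warn about, one per line
        acl blacklist dstdomain "/etc/squid/blacklist.txt"
        # clients that clicked "Let me in anyway" (crude: a file of IPs)
        acl warned src "/etc/squid/warned_ips.txt"

        # order matters: blacklist must be last so deny_info attaches to it
        http_access deny !warned blacklist
        # %u expands to the denied URL, so the warning page can offer
        # both "Get me out of here" and "Let me in anyway"
        deny_info http://filter.example.local/warn.cgi?url=%u blacklist

    Squid's bundled session helper (squid_session) is the more usual building block for the bypass part, since it makes the click-through time-limited instead of permanent.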

    Read the article

  • Load Sharing Regarding Large Websites

    - by JHarley1
    Hello, I have a question regarding load sharing for large websites.

    My understanding: if you have a website that gets millions of hits a day, you will need an architecture that can support this sort of pressure. You can do one of two things:

    - Invest in a single large server that has huge amounts of processing power, memory and storage (such as Microsoft's TerraServer).
    - Spread the load of your website across a number of machines.

    Let me tackle the second approach: you have a collection of machines, all running web server software and all having access to identical copies of the website's pages. You can spread the load across these machines either using a cyclic pattern in DNS or using a load-balancing switch. The advantages of this approach are:

    - Redundancy - servers can fail and the others would "pick up the slack"
    - Incremental - the ability to easily add new machines to the set-up

    My questions:

    - Is there a virtual approach to this issue of load balancing now?
    - If the website runs from a database, is there still only a single copy of the database?
    - If a user had a session running on one server (e.g. they had gone to www.example.org and been assigned to Server 2, where they had created a session), would they still have their session if they refreshed the website and were allocated to Server 3?
    - What are the other disadvantages associated with load balancing?

    Many thanks, J
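
    As a minimal illustration of the DNS approach, a zone can simply return several A records for the same name - most resolvers rotate through them (addresses are placeholders; note that plain round-robin DNS has no health checking, so a dead server keeps receiving its share of clients):

        ; round-robin across three web servers
        www.example.org.  300  IN  A  192.0.2.11
        www.example.org.  300  IN  A  192.0.2.12
        www.example.org.  300  IN  A  192.0.2.13

    The session question is exactly why load balancers offer "sticky sessions" (pinning a client to one server) or why servers share session state in a common store - with neither, a client re-allocated to Server 3 would indeed lose the session created on Server 2.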

    Read the article

  • Do all routers really need to know the routes to every other router?

    - by Philipili
    This is my complicated and long question. First, let's talk about the context.

    Network topology:

        PC A --- RT A --- RT C --- RT B --- PC B

    (RT C has a WAN NIC connected to "the cloud".)

    The situation:

    - PC A must send a packet to PC B
    - Default routes direct packets to the cloud
    - We have no access to RT C's configuration
    - RT C only knows how to reach network A, not network B
    - RT A knows about network B
    - RT B knows about network A

    RT C's routing table:

        Destination   NIC     Gateway
        0.0.0.0       WAN     Cloud
        Network A     LAN A   RT A's WAN

    RT A's routing table:

        Destination   NIC     Gateway
        0.0.0.0       WAN     LAN A
        Network B     WAN     LAN A

    RT B's routing table:

        Destination   NIC     Gateway
        0.0.0.0       WAN     LAN B
        Network A     WAN     LAN B

    I would like to permit PC A and PC B to communicate, but I don't have access to RT C. Networks B and BC are new. Can PC A send a packet to RT B's WAN NIC (which is possible) and "ask RT B to direct the packet to PC B"? I believe replacing RT B with a VPN server would do the trick, but I would like to know if it is possible to make it work without establishing a new connection.
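
    For comparison, here is roughly what "asking RT B to forward" looks like when done explicitly, with a tunnel between RT A and RT B over the cloud - a sketch only, assuming both routers are Linux-based and GRE is allowed end to end; all names and addresses are placeholders:

        # on RT A (RTB_WAN_IP / RTA_WAN_IP / NET_A / NET_B are placeholders)
        ip tunnel add tun0 mode gre remote RTB_WAN_IP local RTA_WAN_IP ttl 255
        ip link set tun0 up
        ip route add NET_B dev tun0

        # on RT B, the mirror image
        ip tunnel add tun0 mode gre remote RTA_WAN_IP local RTB_WAN_IP ttl 255
        ip link set tun0 up
        ip route add NET_A dev tun0

    GRE is stateless, so nothing "connects" in the TCP/VPN sense - RT C just sees ordinary packets between the two WAN addresses, which it already knows how to route.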

    Read the article

  • What are useful .screenrc settings?

    - by gyaresu
    Basically, settings like some of my own, which I've posted below. I'm looking for added functionality for the programme 'screen'. At the very least, have a look at the last line for a fantastic 'menu bar' at the bottom of a screen session.

        ## gyaresu's .screenrc 2008-03-25
        # http://delicious.com/search?p=screenrc

        # Don't display the copyright page
        startup_message off

        # tab-completion flash in heading bar
        vbell off

        # keep scrollback n lines
        defscrollback 1000

        # Doesn't fix scrollback problem on xterm because if you scroll back
        # all you see is the other terminal's history.
        # termcapinfo xterm|xterms|xs|rxvt ti@:te@

        # These will let you use
        bind -c selectHighs 0 select 10    # these three commands are
        bind -c selectHighs 1 select 11    # added to the command-class
        bind -c selectHighs 2 select 12    # selectHighs
        bind -c selectHighs 3 select 13
        bind -c selectHighs 4 select 14
        bind -c selectHighs 5 select 15

        bind - command -c selectHighs      # bind the hyphen to
                                           # command-class selectHighs

        screen -t rtorrent 0 rtorrent
        #screen -t tunes 1 ncmpc --host=192.168.1.4 --port=6600    # was for connecting to MPD music server
        screen -t stuff 1
        screen -t irssi 2 irssi
        screen -t dancing 4
        screen -t python 5 python
        screen -t giantfriend 6 these_are_ssh_to_server_scripts.sh
        screen -t computerrescue 7 these_are_ssh_to_server_scripts.sh
        screen -t BMon 8 bmon -p eth0
        screen -t htop 9 htop
        screen -t hellanzb 10 hellanzb
        screen -t watching 3
        #screen -t interactive.fiction 8
        #screen -t hellahella 8 paster serve --daemon /home/gyaresu/downloads/hellahella/hella.ini

        shelltitle "$ |bash"

        # THIS IS THE PRETTY BIT
        # change the hardstatus settings to give a window list at the bottom of the
        # screen, with the time and date and with the current window highlighted
        hardstatus alwayslastline
        #hardstatus string '%{= mK}%-Lw%{= KW}%50>%n%f* %t%{= mK}%+Lw%< %{= kG}%-=%D %d %M %Y %c:%s%{-}'
        hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %d/%m %{W}%c %{g}]'

    Read the article

  • Parallel printing over an Ethernet cable

    - by Crudler
    I have a bit of an interesting challenge :) I have a machine with a parallel printer output, and I want it to print to a printer in a different room. I know that parallel isn't great over long distances. I found this: http://www.amazon.com/over-Cat5-Extension-Cable-Adapter/dp/B002WJ9S6Y%3FSubscriptionId%3DAKIAINHICTCYYZGJWT4Q%26tag%3Dusbprintercables.net-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB002WJ9S6Y which will let me connect over Cat5, but it's USB to Cat5. My machine can only output on parallel (it's not a computer), so what I was thinking of getting is a parallel (F) to USB adapter and a USB to parallel (M) adapter for either side, i.e.:

        machine - parallel - USB - Cat5 - USB - parallel - printer

    It just seems a bit messy :) Suggestions?

    Another thing I would like to try is to get rid of the old-school parallel printer and instead use a network-based multifunction. Would this be possible? I.e.:

        machine - parallel - USB - Cat5 - Ethernet print server - network printer

    This might be rougher, because the machine cannot "know" that we are using a network printer. It can ONLY print to LPT1. Thanks!

    Read the article

  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", as SSH usually won't let me try 20 different keys to log in. At the moment, I am specifying the identity file for every host manually, using the IdentityFile and IdentitiesOnly directives, so that SSH will only try one key file, which works.

    Unfortunately, this stops working as soon as the original keys aren't available anymore. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently, SSH selects the identity by public key signature and not by file name, which means that the original files have to be available so that SSH can extract the public key. There are two problems with this:

    - it stops working as soon as I unplug the flash drive holding the keys
    - it renders agent forwarding useless, as the key files aren't available on the remote host

    Of course, I could extract the public keys from my identity files and store them on my computer, and on every remote computer I usually log into. That doesn't look like a desirable solution, though.

    What I need is a way to select an identity from ssh-agent by file name, so that I can easily select the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
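
    For what it's worth, the workaround dismissed above can be scripted in a few lines, since newer OpenSSH versions will accept a public-key file as IdentityFile and fetch the private half from the agent - a sketch, worth verifying against your OpenSSH version; the directory name is arbitrary:

        # dump one .pub file per key currently held by the agent
        mkdir -p ~/.ssh/agent-pubkeys
        ssh-add -L | while read -r key; do
            name=$(printf '%s' "$key" | awk '{print $NF}' | tr '/' '_')   # comment field, often the original path
            printf '%s\n' "$key" > ~/.ssh/agent-pubkeys/"$name".pub
        done

        # then, per host in ~/.ssh/config:
        #   IdentitiesOnly yes
        #   IdentityFile ~/.ssh/agent-pubkeys/my-key.pub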

    Read the article

  • CentOS - mdadm raid1 drive won't mount to default location

    - by danny
    I'm running CentOS 5.5. The system, boot, swap, etc. are all on /dev/sda, and I have two identical single-partition drives, /dev/sdb1 and /dev/sdc1, that are configured as RAID1 (using mdadm). It was working fine (configured to mount to /mnt/data in the fstab file); I recently let yum install a couple of automatic updates without paying attention to what they were, and now it doesn't work.

    RAID is working fine (dmesg shows it gets loaded correctly). mdstat shows:

        # cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdc1[1] sdb1[0]
              XXXX blocks [2/2] [UU]
        unused devices: <none>

    Additionally, I can mount it anywhere other than its default directory (i.e. the following works, and I can read data off the drives):

        # mount /dev/md0 /mnt/data2
        EXT3-fs warning: mounting fs with errors, running e2fsck is recommended

    But when I run the following, I get:

        # mount -a
        mount: /dev/sdb1 already mounted or /mnt/data busy

    It says nothing is mounted when I try to umount /dev/sdb1 or umount /mnt/data, so I assume it's the second of those errors. However, lsof | grep mnt shows nothing. The weird thing is that I can save files in /mnt/data. So something is obviously mounted there, but when I try to umount it I get the error that nothing is mounted. /etc/mtab doesn't mention any of the partitions or files I am trying to work with, and fstab just has that one line I mentioned above that is supposed to mount my RAID partition. Again, it was all working fine until those yum updates.

    On Google I've found a few things about dmraid interfering with mdadm after an update, but I yum remove'd dmraid and rebooted, and it didn't help. I'm really confused and need to get this working to get on with my work!
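
    A few generic diagnostics that help untangle "mounted but not mounted" states like this one (nothing here is specific to RAID):

        # the kernel's view of what is mounted - more trustworthy than /etc/mtab
        grep -E 'md0|mnt' /proc/mounts

        # who, if anyone, is holding the mount point or the devices busy
        fuser -vm /mnt/data
        fuser -v /dev/sdb1 /dev/md0

        # check whether something grabbed sdb1 directly instead of md0
        dmesg | grep -iE 'md0|sdb1'

    If /proc/mounts and /etc/mtab disagree, mount -a can refuse to act even though umount insists nothing is mounted, which matches the symptoms described.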

    Read the article

  • Win 2008 R2 - copying TO disk is very slow, copying FROM is more or less okay

    - by avs099
    I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID 5 array. It has 4 GB of memory; all recent updates are installed.

    Problem: when I copy a large file from one folder to another, I get about 10 MB/s average speed. When I read this file from a network share via a 1 Gbps connection, I get about 25-30 MB/s. Both numbers seem low to me, but I'm specifically very frustrated with the low write speed.

    There is no antivirus and no Hyper-V; it's just a file server, and when I do my tests nobody else reads from or writes to it (we have only 4 people in the team, so I'm sure). Not sure if it matters, but there is only one logical disk, "C", with all available space (1400 GB).

    I'm not an admin at all, so I have no idea where to look or what other information to provide. I did run Performance Monitor with "% idle time", "avg bytes read" and "avg bytes write" - here is the screenshot: I'm not sure why there are such obvious spikes. Any idea?

    Please let me know if you need me to provide more information - what counters I should check, etc. I'm very eager to get this solved. Thank you.

    UPDATE: we have another Windows 2008 R2 SP1 server with 2 RAID1 arrays - one is disk C (where Windows is installed), the other is disk E. It is running Hyper-V and does not have antivirus. I noticed the following behavior when I copy a large file (a few GBs):

    - C - C: about 50 MB/s
    - C - E: about 55 MB/s
    - E - E: 8 MB/s!!!
    - E - C: 8 MB/s!!!

    What could cause this?? The E drive is a RAID1 array of the same Seagate Barracuda 1TB drives.

    Read the article

  • Application to automate Windows software installation in a test lab

    - by Marc
    I have several test environments (Hyper-V) which contain a variety of Windows servers. Each machine periodically needs rolling back to a given snapshot and then re-installing with the latest version of our software to test. The software installs are quite complex MSIs with a fair few option screens. I know that the installs can be driven from the command line, passing in parameters to override the wizard options.

    At the simplest level, I suppose I could just write a batch file to kick off each install with the required parameters; however, the values that are passed in do need to change from time to time (and environment to environment), so a tool with a config file and a simple GUI seems like a better idea.

    I think what makes it slightly more painful is the multiple environments. For example, one environment might contain 4 servers and need a config file with all the server names, service endpoints, etc. Another environment might be a 1-box install with all names and endpoints set to localhost. So, ideally, I want to be able to store different setup configurations and use them to run all the required installers with the relevant settings against the relevant machines.

    Before I go off to write the thing, does anyone know of an existing, simple, free tool that will let me achieve this?
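
    As a starting point before reaching for a full tool, the per-environment values can at least be isolated into one small file per environment that a shared batch script consumes - a sketch; the MSI property names here are invented examples, the real ones come from your installer:

        :: env-test4box.cmd - one of these per environment
        set DB_SERVER=sql01.test.local
        set SERVICE_ENDPOINT=http://app01.test.local:8080/service

        :: install.cmd - shared by all environments; usage: install.cmd env-test4box.cmd
        call %1
        msiexec /i OurProduct.msi /qn /l*v install-%COMPUTERNAME%.log ^
            DB_SERVER=%DB_SERVER% SERVICE_ENDPOINT=%SERVICE_ENDPOINT%

    Run per machine, or remotely via a tool such as psexec.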

    Read the article

  • Does AMD Cool'n'Quiet Slow Down Your System?

    - by Software Monkey
    I discovered today that having AMD Cool'n'Quiet enabled in my BIOS appears to be slowing down my Windows XP SP2 system by about 29% on memory- and CPU-intensive workloads. I was wondering if (a) anyone else has encountered this, (b) anyone can offer an explanation, and (c) there are any negatives I need to be aware of if I keep AMD CnQ disabled.

    With some superficial testing so far, I don't immediately notice any difference with CnQ off (other than the performance being what I expected from this new hardware). It seems to ramp up the CPU fan a little bit as my program maxes out 1 core, but that's the same as with CnQ on. And when I let the system idle, the CPU fan slows down and the system's as quiet as a mouse (after years of 6 small fans churning like they want to go into orbit, it's nice to again have a system where I can hear the HDDs seeking).

    Bonus question: Does CnQ cause issues with system stability? I ask because the reason I disabled it was that I have had a few freezes and 1 spontaneous reboot with my new hardware.

    Read the article

  • ISA Server dropping packets as it believes they are spoofed

    - by RB
    We have ISA Server 2004 running on Windows Server 2003 SP2. It has 2 NICs - one internal, called LAN, on 192.168.16.2 with a subnet mask of 255.255.255.0, and one external, called WAN, on 93.x.x.2. The default gateway is 93.x.x.1 (our modem). This machine also accepts VPN connections.

    We are having a problem with a scanner, which is trying to save a scan into a network share. Every time we try to scan, ISA Server logs the following denied connection:

        Log type: Firewall service
        Status: A packet was dropped because ISA Server determined that the source IP address is spoofed.
        Rule:
        Source: Internal (192.168.16.54:1024)
        Destination: Internal (192.168.16.255:137)
        Protocol: NetBios Name Service

    Pinging 192.168.16.54 from the ISA Server works fine. In ISA Server, going into Configuration → Networks, there are 5 networks:

    - External (inbuilt)
    - Internal (defined as 192.168.16.0 → 192.168.16.255)
    - Local Host (inbuilt)
    - Quarantined VPN Clients (inbuilt)
    - VPN Clients (inbuilt)

    Finally, under Network Connections → Advanced → Advanced Settings..., the connections are in the following order:

    - LAN
    - WAN
    - [Remote Access Connections]

    If we try to scan onto a workstation, it works fine. Please let me know if you need any more info - many thanks. RB.

    Read the article

  • Unable to destroy Windows 2008 R2 failover cluster after SAN rebuild

    - by Zack
    I created a Windows 2008 R2 failover cluster for a SQL 2008 active/passive cluster. This two-node cluster was using a SAN device for a quorum disk resource as well as an MSDTC resource. Well... I decided to reconfigure the SAN device, but I didn't destroy the cluster first. Now that the quorum disk and MSDTC disk are completely gone, the cluster is obviously not working. But I can't even destroy the cluster and start again. I've tried from the Windows clustering tool as well as the command line.

    I was able to get the cluster service to start using the "/fixquorum" parameter. After doing this I was able to remove the passive node from the cluster, but it wouldn't let me destroy the cluster, because the default resource group and MSDTC are still attached as resources. I tried to delete these resources from both the GUI tool and the command line. It either freezes for several minutes and crashes the program, or - once - it even BSOD'd the server.

    Can someone advise on how to destroy this cluster so I can start over?
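
    For reference, the usual last resort on 2008 R2 is to force-evict each node and wipe its local cluster state - verify before running, since this removes the node's cluster configuration outright (NODE1 is a placeholder):

        # PowerShell, FailoverClusters module
        Import-Module FailoverClusters
        Clear-ClusterNode -Name NODE1 -Force

        :: or the legacy command-line equivalent
        cluster node NODE1 /forcecleanup

    After cleaning every former member, the cluster can be re-created from scratch.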

    Read the article

  • Apache2 VirtualHost on Debian not working

    - by milo5b
    I am having some problems with my Apache2 configuration. I have already tried to look for documentation on the web (Apache's site, Debian's site, here on Server Fault, etc.), but nothing really helps. I have tried different configurations, but my current configuration is the following (/etc/apache2/sites-available/default):

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName mysite.dev
            ServerAlias mysite.dev
            DocumentRoot /var/www/mysite.dev/httpdocs/
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName livesite.com
            ServerAlias www.livesite.com
            DocumentRoot /var/www/livesite.com/httpdocs/
            <Directory /var/www/livesite.com/httpdocs/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    mysite.dev is just an entry in the hosts file on my client machine, while livesite.com is an actual DNS record which resolves to the same IP as the one set in the hosts file for mysite.dev. The problem is that when I type mysite.dev in my browser, it automatically goes to livesite.com. I have tried having separate files in /etc/apache2/sites-enabled/ (/etc/apache2/sites-enabled/mysite.dev, /etc/apache2/sites-enabled/livesite.com) - with, of course, the related sites-available files - but with the same results. I have tried to have a peek at error.log and access.log, but there's nothing I can see.

    My httpd.conf contains:

        AccessFileName .htaccess

    And I have no /etc/apache2/conf.d/virtual.conf file. Any help would be greatly appreciated - if I did not provide enough info, please let me know and I will do my best to provide all necessary info. Thanks
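
    Two standard things to check with Apache 2.2 name-based vhosts (not specific to this config, but they produce exactly the "every name goes to one site" symptom):

        # show how Apache actually parsed the vhost configuration
        apache2ctl -S

        # Apache 2.2 needs name-based vhosts switched on for the address:port;
        # without this line, the first matching vhost answers every request:
        # NameVirtualHost *:80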

    Read the article

  • --log-slave-updates is OFF but updates received from master are still logged to slave binary log?

    - by quanta
    MySQL version 5.5.14. According to the documentation, by default a slave does not log to its own binary log any updates that are received from a master server. Here is my config on the slave:

        # egrep 'bin|slave' /etc/my.cnf
        relay-log=mysqld-relay-bin
        log-bin = /var/log/mysql/mysql-bin
        binlog-format=MIXED
        sync_binlog = 1
        log-bin-trust-function-creators = 1

        mysql> show global variables like 'log_slave%';
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | log_slave_updates | OFF   |
        +-------------------+-------+
        1 row in set (0.01 sec)

        mysql> select @@log_slave_updates;
        +---------------------+
        | @@log_slave_updates |
        +---------------------+
        |                   0 |
        +---------------------+
        1 row in set (0.00 sec)

    But the slave still logs the updates that are received from the master to its binary logs. Let's look at the file sizes:

        -rw-rw---- 1 mysql mysql  37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256
        -rw-rw---- 1 mysql mysql  25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257
        -rw-rw---- 1 mysql mysql  46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258
        -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259
        -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260

    And here is a sample query when reading these binary files with the mysqlbinlog utility:

        #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0
        SET TIMESTAMP=1333541337/*!*/;
        INSERT INTO norep_SplitValues
        VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci'))
        /*!*/;
        # at 110324763

    Did I miss something?
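
    One way to pin down where those binlog events come from is to compare the server id stamped on each event (the id of the server that originally executed the statement) with the slave's own server_id:

        # the slave's own id
        mysql -e "SELECT @@server_id;"

        # count events per originating server id in one binlog file
        mysqlbinlog /var/log/mysql/mysql-bin.001260 \
            | grep -o 'server id [0-9]*' | sort | uniq -c

    If the ids differ from the slave's own, the events really are replicated ones; if they match, something local (a trigger, a scheduled event, an application connection) is writing them.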

    Read the article

  • Macs don't connect to wifi access point but PCs will

    - by Josh
    So, as a side project I'm going to try to figure out why the wifi APs in my building exhibit the following behavior:

    - They typically allow all types of computers to connect without issues
    - Sometimes Apples can't get an IP address but will still connect to the AP's signal
    - Less often, PCs can't connect to the wifi (same as above - yes signal, no IP addy)
    - Don't let Raiders fans on, no matter the time of day!

    My first thought was that the DHCP leases were all taken up when the Apples would try to connect, and it was just their unlucky timing, but I would then try to log on with a PC that had a new, unleased MAC address and it would work... Could this be something to do with interoperability between an Apple wifi card and the APs? Different parts of the DHCP lease being taken up first? The fact that the Seattle Mariners might actually be good this year??

    If this hasn't used up everyone's patience (with my crappy sports jokes), something else I could use some help with:

    - We don't know the model or type of AP. This is because there is no documentation available for them, and they literally look like small white boxes with no writing on them. Also, the company that installed them is out of business, so it might be that no docs will ever be on the way.
    - Do you guys have any ideas on how to figure out what we have?

    Thanks as always for all the help, and I'm looking forward to the day when I know enough to start contributing back to the site, Josh
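
    One low-effort way to identify mystery APs is the vendor half (OUI) of their MAC addresses - the BSSID reported by any associated client, or the MAC learned by your switch:

        # Linux client: show the AP's BSSID
        iwconfig wlan0 | grep "Access Point"

        # then match the first three octets against the IEEE OUI registry
        # (oui.txt is downloadable from standards.ieee.org)
        grep -i "00-17-0F" oui.txt   # example octets, not a real lookup

    The OUI usually narrows the AP down to a manufacturer, which combined with a photo of the board often identifies the model.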

    Read the article

  • IIS7 binding to subdomain causing authentication errors

    - by Tommy Jakobsen
    I'm trying to bind an IIS web site to a subdomain, which is causing authentication errors. First I'll explain what I've done to set it up. This is the first time I've done this, so please correct me if I'm wrong.

    The web server is a stand-alone Windows Server 2008 R2 x64, running IIS7 with .NET Framework 4. I have the following A records pointing to my server:

        server.mydomain.com
        *.server.mydomain.com

    So all subdomains of server.mydomain.com point to the server. In IIS7 I have a web site on port 8080 with a virtual directory (named virtual) that is using Windows Authentication. I have one binding on the web site, pointing to all unassigned IP addresses, port 8080, and with a host name of sub.server.mydomain.com.

    Now, shouldn't I be able to access the virtual directory through:

        http://sub.server.mydomain.com/virtual

    That is not working. However, I can access it through:

        http://sub.server.mydomain.com:8080/virtual

    But it won't let me authenticate using a Windows account (Server\Username) - an account that I can authenticate with when accessing the site through http://localhost:8080/virtual. What am I missing here?
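
    Two quick checks that are easy to script here (the SPN step is a guess at the authentication failure, not a confirmed diagnosis):

        :: show exactly which bindings IIS has registered - the port-less URL
        :: fails simply if there is no *:80 binding for that host name
        %windir%\system32\inetsrv\appcmd list sites

        :: if Windows Authentication negotiates Kerberos, a non-default host
        :: name may need a service principal name registered for the machine
        setspn -S HTTP/sub.server.mydomain.com SERVERNAME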

    Read the article

  • How to get multipath working for Ubuntu Server 12.04

    - by mlampi
    I'm working on a project which aims to make use of Ubuntu servers running on enterprise-class hardware. In our case that means IBM HS23E blade servers, QLogic 4 Gb Fibre Channel expansion cards, and a quite old IBM DS4500 disk array with two controllers. At the moment we have Fibre Channel as the only boot option, and Ubuntu Server 12.04 installed just fine and is able to boot without multipath. I'm not a Linux professional myself, but in our team we have people who understand the technical stuff. Don't let my post confuse you :)

    The current situation is that we have only one Fibre Channel connection to a single disk array controller. A real-life setup would of course be quite different: at minimum we should have two Fibre Channel ports connected to two different switches and two different controllers. However, we have no idea how to set up the multipath tool. Is DM-MPIO the right software? At minimum we should be able to boot when multiple connections are available, and achieve fault tolerance when any of them goes down.

    Since the disk array is not the latest hardware, I managed to find RDAC driver sources only for the 2.6.x kernel, and we have 3.2.x. Another issue is building a multipath.conf. The said driver sources are from IBM support, and the QLogic drivers provided to the Ubuntu installer are from the Ubuntu site. It seems that RHEL and SLES would have near out-of-the-box support, but that is not an option for our project.

    Actual questions:

    - What is the recommended multipath software for Ubuntu Server 12.04?
    - Are there pre-made configurations or templates available? Does it require disk array / controller specific settings, or does a more generic config work?
    - Do you have experience with a similar setup and would you like to share the knowledge?

    I'll provide you with any additional information you might require. Thanks in advance.
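
    For the record, DM-MPIO is what Ubuntu ships for this: the multipath-tools package, plus multipath-tools-boot for multipathed boot devices. A minimal sketch - the commented device section is an illustrative placeholder, not verified DS4500 settings:

        apt-get install multipath-tools multipath-tools-boot

        # /etc/multipath.conf
        defaults {
            user_friendly_names yes
        }
        # an older array like the DS4500 usually needs a vendor/product
        # specific device section (values below are placeholders only):
        #device {
        #    vendor  "IBM"
        #    product "1742*"
        #    path_grouping_policy group_by_prio
        #}

        # inspect the paths the daemon has discovered
        multipath -ll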

    Read the article

  • Repository bugzilla package changed to bugzilla3 in Lenny; upgradable?

    - by Pukku
    This question was asked on debianhelp.org almost half a year ago but never got an answer. I wasn't the one who posted it, but today I was facing exactly the same question. Not sure if copying it here as such is considered inappropriate or something, but there's not really anything that I would even like to paraphrase... so let's just go. (I'm sure you will be happy to close it if this is not the way to go :)

    Hello all! We are using a Bugzilla server installed on a Debian 4/Etch server and are starting to look at the upgrade to Debian 5/Lenny. I was hoping to upgrade the existing Bugzilla server and database from the oldstable (v2.22) to the newer stable in Lenny (v3) when we get to doing a dist-upgrade. However, from testing in a virtual machine, it seems that the old package was called "bugzilla" whereas the Lenny package is called "bugzilla3", and I could not figure out a way to directly upgrade between the two.

    Is it possible to establish some kind of upgrade path quickly after the dist-upgrade, to minimise downtime, using apt-get or aptitude? Going on past experience, I would not want to do a fresh install with the bugzilla3 package and attempt to inject the old database into it (previous attempts failed miserably!) :(
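
    In outline, the upgrade path that generally works for Bugzilla major versions is to carry the database across and let the new version's checksetup.pl migrate the schema in place - a sketch under assumptions ('bugs' is Bugzilla's default database name; the checksetup.pl location depends on where the bugzilla3 package puts it, so verify locally):

        # before the dist-upgrade: snapshot the v2.22 database
        mysqldump bugs > bugzilla-2.22.sql

        # after the dist-upgrade:
        apt-get install bugzilla3
        # point bugzilla3's localconfig at the old database (or restore the
        # dump into a fresh one), then run the schema migration:
        ./checksetup.pl

    Testing the whole sequence in the virtual machine first, as the poster is already doing, is very much advised - checksetup.pl upgrades the schema irreversibly.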

    Read the article

  • Apache2: 400 Bad Request with Rewrite Rules, nothing in error log?

    - by neezer
    This is driving me nuts. Background: I'm using the built-in Apache2 & PHP that come with Mac OS X 10.6. I have a vhost set up as follows:

        NameVirtualHost *:81

        <Directory "/Users/neezer/Sites/">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <VirtualHost *:81>
            ServerName lobster.dev
            ServerAlias *.lobster.dev
            DocumentRoot /Users/neezer/Sites/lobster/www

            RewriteEngine On
            RewriteCond $1 !^(index\.php|resources|robots\.txt)
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L,QSA]

            LogLevel debug
            ErrorLog /private/var/log/apache2/lobster_error
        </VirtualHost>

    This is in /private/etc/apache2/users/neezer.conf. My code in the lobster project is PHP with the CodeIgniter framework. Trying to load http://lobster.dev:81/ gives me:

        400 Bad Request

    Normally I'd go check my logs to see what caused it, yet my logs are empty! I looked in both /private/var/log/apache2/error_log and /private/var/log/apache2/lobster_error, and neither records ANY message relating to the 400. I have LogLevel set to debug in /private/etc/apache2/httpd.conf.

    Removing the rewrite rules gets rid of the error, but these same rules work on my MAMP host. I've double-checked that rewrite_module is loaded in my default Apache installation. My httpd.conf can be found here: https://gist.github.com/1057091

    What gives? Let me know if you need any additional info.

    NOTE: I do NOT want to add the rewrite rules to .htaccess in the project directory (it's checked into a git repo and I don't want to touch it).
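
    On Apache 2.2 (which 10.6 ships), mod_rewrite has its own log, separate from ErrorLog - usually the fastest way to see what a failing rule set actually does. A debugging-only sketch for the vhost above:

        # inside the <VirtualHost *:81> block; remove when done
        RewriteLog /private/var/log/apache2/lobster_rewrite
        RewriteLogLevel 9

    If the 400 never shows up there either, the request is being rejected before the vhost's rewrite rules run, which is itself a useful clue.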

    Read the article

  • How to uninstall files and software from a Solid State Drive (SSD)

    - by jasondavis
    I am using an SSD just as the main operating system and software drive; I have a regular spinning disk for data files. I have read that files are not REALLY deleted from an SSD when you "delete" them. So here is my situation, and hopefully I can get some good advice.

    Let's say I have Windows 7 Pro x64 installed on my 80 GB Intel SSD. I then have Adobe Photoshop CS4 installed, along with 50 other programs, on this SSD. I then decide I am done with Adobe Photoshop CS4 and want to remove it, and I want to install another version of Adobe Photoshop and a few other software titles.

    Would I just do the usual - add/remove software from the Windows control panel? Or, if the software being removed has an "Uninstall" program to run, run it and uninstall the software? I realize that all this WILL remove the software from Windows 7, but I am wanting to know if there are additional steps that should be taken since it is a solid state drive (SSD).
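
    Uninstalling works exactly as on a spinning disk - add/remove programs or the product's own uninstaller. The "files are not really deleted" concern is what TRIM addresses, and Windows 7 enables it automatically on SSDs; it can be verified with a stock command:

        :: 0 means delete notifications (TRIM) are enabled
        fsutil behavior query DisableDeleteNotify

    With TRIM active, the drive is told which blocks are free after a delete or uninstall, so no extra SSD-specific cleanup steps are needed.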

    Read the article

  • I need scanning software to use with my scanner.

    - by D Connors
    So, I got this HP F4280 printer/scanner about a year ago, and I'm really happy with it. The only thing is that I never really liked the HP scanning software that came with it. A few months ago I reformatted and reinstalled Windows 7. Then, once I plugged in the printer, I noticed that Windows recognized it automatically and offered to install all the drivers by itself. So instead of manually installing the driver that came on the CD, I simply let Windows automatically install it from its servers, and so far it's great. Instead of HP's scanning software (which really wasn't pleasing me), I got a very simplified interface that is more than enough for my occasional scanning habits.

    Until today. Today I had to scan a bunch of old pictures for my father, and that simple interface felt like it was lacking quite a few features that would make this repetitive task a little easier. That's why I'm now looking for a good scanning application. By "good" I mean anything well thought out, and especially anything that will make my life easier when doing repetitive scanning. It doesn't need to have professional tweaking options, but having them is not a problem either. You guys got anything?

    Read the article

  • Replicated MongoDB server slower than simple shards

    - by displayName
    I tried to compare the performance of a sharded configuration against a sharded and replicated configuration.

    The sharded configuration consists of 8 shards, each running on three different machines, thereby constituting a total of 24 shards. All 8 of these shards run in the same partition on each machine.

    The sharded and replicated version is 8 shards again, just like plain sharding, and all 8 mongods run in the same partition on each machine. But apart from this, each of these three machines now runs 16 additional processes on another partition, which serve as the secondaries for the 8 mongods running on the other machines. This is the way I prepared a sharded and replicated configuration with data chunks having a replication factor of 3.

    An important point to note is that once the data has been loaded, it is not modified, so after the primaries and secondaries have synchronized, it doesn't matter which one I read from.

    To run the queries, I use an entirely different machine (let's call it config) which runs mongos; this machine's only purpose is to receive queries and run them on the cluster. Contrary to my expectations, plain sharding with 8 shards on each machine (total = 3 * 8 = 24) is performing better for queries than the sharded + replicated configuration. I have a script written to perform the queries, so to time them I run time ./testScript and look at the result. I tried changing the read preference for the replicated cluster by logging into the mongo shell on config, running db.getMongo().setReadPref('secondary'), then exiting the shell and running the queries with time ./testScript.

    The questions are:

    - Where am I going wrong in the replication? Why is it slower than the plain sharding version?
    - Does the db.getMongo().setReadPref('secondary') persist when I leave the shell and then perform the query?

    All four machines are running Linux, and I have already increased ulimit -n to 2048 from the initial value of 1024 to allow more connections. The collections are properly distributed, and all the mongods have an equal number of chunks. It goes without saying that the indexes in both configurations are the same.
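
    Regarding the last question: setReadPref() applies to the current shell session only, so it does not carry over to a separate run of the query script - the setting has to live inside whatever mongo invocation runs the queries. A sketch with placeholder database, collection and query names:

        # read preference must be set in the same session that queries:
        time mongo config:27017/mydb --quiet --eval "
            db.getMongo().setReadPref('secondary');
            db.mycollection.find({ key: 'value' }).itcount();
        "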

    Read the article

  • Using Windows Explorer, how to find file names starting with a dot (period), in 7 or Vista?

    - by Chris W. Rea
    I've got a MacBook laptop in the house, and when Mac OS X copies files over the network, it often brings along hidden "dot-files" with it. For instance, if I copy "SomeUtility.zip", a hidden ".SomeUtility.zip" file will also be copied. I consider these OS X dot-files useless turds of data as far as the rest of my network is concerned, and I don't want to leave them on my Windows file server.

    Let's assume these dot-files will continue to happen, i.e. consider the issue of getting OS X to stop creating those files in the first place to be another question altogether. Rather: how can I use Windows Explorer to find files that begin with a dot/period? I'd like to periodically search my file server and blow them away. I tried searching for files matching ".*", but that yielded - and not unexpectedly - all files and folders.

    Is there a way to enter more specific search criteria when searching in Windows Explorer? I'm referring to the search box that appears in the upper-right corner of an Explorer window. Please tell me there is a way to escape my query to do what I want? (Failing that, I know I can map a drive letter, drop into a Cygwin prompt and use the UNIX 'find' command, but I'd prefer a shiny easy way.)
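
    A fallback that avoids the query-escaping problem entirely is PowerShell, which ships with Windows 7 (offered as an alternative to the Explorer box rather than an answer for it; reportedly the Explorer search box also accepts the AQS "starts with" form filename:~<"." but that is worth testing first):

        # list files whose names start with a dot, recursively, hidden included
        Get-ChildItem \\server\share -Recurse -Force | Where-Object { $_.Name -like ".*" }

        # preview deletion before doing it for real:
        # ... | Remove-Item -WhatIf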

    Read the article

  • Distorted Sound

    - by BCable
    I have my laptop hooked up to my receiver for sound output. I hear a hissing/crackling background sound that is really loud and hard to just ignore (though it's possible). When my 360 is connected, the sound comes out perfect, so it's just with this laptop. Previously, I thought it was just my laptop and submissively let it slide. I just bought a brand new laptop, though, and it's doing the same thing. Now that I know it's not the laptop itself, I have found out more information:

    - I have used this laptop in similar environments where it worked just fine (different speakers).
    - I bought a new cable to connect to my receiver and it did nothing (headphone jack to RCA).
    - I tried different ports on the receiver (Video 1-3) and it always happens.
    - I have discovered that the sound goes away if I unplug my laptop (so it's running on battery).
    - Because of the last one, I tried plugging my laptop into a different outlet across the room, and it's STILL doing it.
    - It doesn't matter if I boot to Linux or Windows, yet my phone (Android G1) doesn't cause this sound using the exact same cable.

    Any ideas? I'm out of them! Thanks!

    Read the article

  • Can't set screen brightness in Gentoo system

    - by Real Yang
    My system: Linux gentoo 3.10.7-gentoo-r1

        VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 240M] (rev a2)

    Output of xbacklight:

        No outputs have backlight property

    Output of xrandr:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 640 x 480, current 1280 x 720, maximum 1280 x 768
        default connected 1280x720+0+0 0mm x 0mm
           1280x720        0.0*
           1024x768       61.0
           800x600        61.0
           640x480        60.0
           1280x768        0.0

    Output of ls /proc/acpi:

        button/  event

    When I'm on kernel 3.8.13, I can change the brightness using xbacklight. I compiled 3.10.7-r1 using genkernel all. Before the upgrade I did get a notice from emerge about "compatibility issues for Nvidia users", but I still don't know the details. Is there any way to let me set the brightness?

    Then I found an ebuild, app-laptop/nvidiabl-0.81, and tried to emerge nvidiabl, but I got this message:

        Your kernel does not support FB_BACKLIGHT. To enable it you can enable
        any frame buffer with backlight control or nouveau. Note that you cannot
        use FB_NVIDIA with nvidia's proprietary driver.
        Please check to make sure these options are set correctly. Failure to do
        so may cause unexpected problems. Once you have satisfied these options,
        please try merging this package again.
        ERROR: app-laptop/nvidiabl-0.81::gentoo failed (pretend phase): Incorrect kernel configuration options
        Call stack:
          ebuild.sh, line 93:  Called pkg_pretend
          nvidiabl-0.81.ebuild, line 31:  Called linux-mod_pkg_setup
          linux-mod.eclass, line 559:  Called linux-info_pkg_setup
          linux-info.eclass, line 911:  Called check_extra_config
          linux-info.eclass, line 805:  Called die
        The specific snippet of code:
          die "Incorrect kernel configuration options"

    [SOLVED] I entered menuconfig again, checked Device Drivers → Graphics support → Support for frame buffer devices, and found this:

        <*> nVidia Framebuffer Support
        [*]   Support for backlight control (NEW)

    What can I say. Recompiling...

    Read the article
