Search Results

Search found 17003 results on 681 pages for 'multi index'.


  • freebsd-update reports an upgraded jail as not upgraded

    - by Martin Torhage
    I've set up a "Service Jail" in FreeBSD 8.0 according to the FreeBSD Handbook (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html). After upgrading the host to the latest patch level and then performing a jail upgrade, freebsd-update fetch still reports that there are files in need of an update in the jail. Is this expected? If so, how do I know whether a jail is up to date?

    This is what I've done in more detail: after the initial setup of the jail, freebsd-update fetch reported that there were no updates available in either the host system or the jail. This was expected. A while later, freebsd-update fetch reported that the following files were in need of an update, both in the host and in the jail:

    /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a

    I updated the host and followed the upgrade guide for the jail (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html#JAILS-SERVICE-JAILS-UPGRADING). freebsd-update fetch now reports that there are no updates available in the host, but the following is the output from freebsd-update fetch in the jail:

    [root@bb /]# freebsd-update fetch
    Looking up update.FreeBSD.org mirrors... 3 mirrors found.
    Fetching metadata signature for 8.0-RELEASE from update5.FreeBSD.org... done.
    Fetching metadata index... done.
    Inspecting system... done.
    Preparing to download files... done.
    The following files are affected by updates, but no changes have been downloaded because the files have been modified locally: /var/db/mergemaster.mtree
    The following files will be updated as part of updating to 8.0-RELEASE-p2: /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a

    Shouldn't freebsd-update know that the jail is up to date, or have I failed to upgrade it? How am I supposed to know whether a jail is up to date if freebsd-update can't tell? I'm sure I ran make cleandir twice before make buildworld. TIA
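
    A minimal sketch of running freebsd-update against the jail's filesystem from the host, assuming the jail's root lives at /usr/jails/bb (adjust the path to your layout); this keeps the jail patched and inspected by the same tool:

    ```sh
    # On the host: fetch and install updates for the jail's root directory
    freebsd-update -b /usr/jails/bb fetch
    freebsd-update -b /usr/jails/bb install

    # Re-inspect afterwards; a fetch run that finds nothing to download
    # suggests the jail's userland matches the current patch level
    freebsd-update -b /usr/jails/bb fetch
    ```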

    Read the article

  • order of operations for environment variables

    - by alyda
    I want to understand how environment variables are set and reset (overridden). I'm running Apache/2.2.24 (Unix) with PHP/5.4.14 on a Mac. My theory is this: environment variables can be set in bash, then overwritten in httpd.conf, which precedes a VirtualHost directive, which precedes php.ini, which can then be overwritten by .htaccess (if allowed) and finally by PHP. I tried the following:

    Setting the variable in bash: I added export ENVIRONMENT='local' to my ~/.bashrc file, restarted Apache and did not get any output from print_r($_ENV); (in a simple index.php file at the root of my webserver). I also tried putting ENVIRONMENT='local' into /etc/environment and restarting Apache: nothing. Same with /etc/bashrc and an Apache restart: still nothing.

    Setting the variable in httpd.conf: I added SetEnv ENVIRONMENT 'local-httpd' to the end of my /etc/apache2/httpd.conf file (but before I load other conf files, such as the virtual hosts [Include /private/etc/apache2/other/*.conf]). I now see the variable in the array from print_r($_SERVER); but not in print_r($_ENV);.

    Setting the variable in httpd-vhosts.conf: I added SetEnv ENVIRONMENT 'local-vhost' to my /etc/apache2/extra/httpd-vhosts.conf file in my generic directive that points to my default document root. I now see the variable has been overwritten (to local-vhost from local-httpd, so I know where the variable is getting set).

    Setting the variable in php.ini: while searching for a proper place to put my environment variable, I noticed that variables_order = "GPCS" was set to the production value rather than EGPCS. I changed it, restarted my server and found that I was now getting output for print_r($_ENV); but not my expected custom variable. It also appears that I am not able to set a custom variable in this file. Please tell me if I am wrong.

    Setting the variable in .htaccess: I added SetEnv ENVIRONMENT 'local-htaccess'. This worked as expected, overwriting all other values that were set.

    Setting / overwriting the variable in PHP: if (...) { putenv('ENVIRONMENT=local'); }

    I'm asking this question because I have a lot of local and remote testing servers, some of which may or may not allow me access to modify httpd.conf, httpd-vhosts.conf, php.ini or environment variables. I want to understand what is best for those different scenarios (shared hosting, Heroku, local servers, etc.). I obviously don't know how to properly set the environment variable in bash in a way that PHP can use it; I'd like to know how to do that (as I think Heroku does something similar with heroku config set...).
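
    One hedged way to make a shell-level variable visible to PHP is to export it in the environment Apache is actually started from and then pass it through explicitly; a minimal sketch (the envvars path is the usual one for a stock Apache layout and may differ on a Mac):

    ```sh
    # 1. Export the variable where Apache's startup script will see it,
    #    e.g. in /usr/sbin/envvars (the file apachectl sources), not ~/.bashrc:
    export ENVIRONMENT='local'

    # 2. In httpd.conf, forward that variable to requests (mod_env):
    #      PassEnv ENVIRONMENT

    # 3. Restart Apache so the parent process picks it up, then check a page
    #    that prints getenv('ENVIRONMENT') or $_SERVER['ENVIRONMENT'].
    sudo apachectl restart
    ```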

    Read the article

  • Magento - Users unable to login from corporate networks with Bluecoat / F5 Load balancers

    - by user1330440
    Hoping someone has come across this issue before with Magento and corporate clients. We have two clients for our Magento site who both have their internal networks setup using bluecoat security devices and F5 load balancers. Some users within these networks are unable to login to Magento - Magento eventually is sending a 302 redirect to /index.php/ when users attempt to log in. Through our testing, the problem appears to be isolated to this setup - we can log into the accounts in question from anywhere outside of these networks without issue, and if the client tries to access the site without going through the F5 load balancer, they are able to log in successfully. Strangely enough, the issue only started occurring for the two sites the day after we introduced a system upgrade which added a new site to the Magento installation. The system upgrade should not have affected any standard login functionality, and as said, the problem does not appear to be with the users in question, but with where the users are accessing the site from. Initially we thought the issue might have something to do with communications between the client's networks and the network which the server was hosted on, so we've tried moving the server to different hosts, but this has not helped. I'm currently waiting for more info from the clients on exact devices / models used in their network setup. I will update this post if more information becomes avaliable. Magento version is enterprise edition of ver. 1.9.0.0 Does anyone know of any tucked away Magento settings that might be able to cause this kind of behavior? Experience with this kind of set-up and ideas for things to look at? All help and ideas for things to follow-up would be appreciated - as this is a current production issue for a large number of users. I will respond asap with any requests for additional information on the topic, but currently am not able to disclose any identifying information on the project in question, and/or the clients experiencing issues. Thanks in advance for any assistance offered :) Note: This question has also been posted on the Magento forums: http://www.magentocommerce.com/boards/viewthread/277917/ And also on Stack overflow (Moved here as a commenter thought this site may be better suited): http://stackoverflow.com/questions/10133978/magento-users-unable-to-login-from-corporate-networks-with-bluecoat-f5-load
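
    One thing worth checking in this situation is Magento's session validation settings, which compare REMOTE_ADDR, HTTP_VIA, X-Forwarded-For and the user agent between requests; proxies and load balancers such as Bluecoat and F5 can legitimately change those values mid-session and trigger exactly this kind of login redirect. A hedged way to see what the installation currently enforces (database name and credentials are placeholders; the config paths are the standard 1.x ones):

    ```sh
    # 1 means the check is enforced; these map to System > Configuration >
    # Web > Session Validation Settings in the admin panel.
    mysql -u magento_user -p magento_db -e \
      "SELECT scope, scope_id, path, value FROM core_config_data \
       WHERE path LIKE 'web/session/use_%';"
    ```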

    Read the article

  • Guests can't access KVM host server by name although nslookup and dig returns correct record

    - by user190196
    So I have a KVM host that also runs an apache server with some yum repos. The VM guests are connected to the default virtual network, which is configured to offer DHCP and forwarding with NAT on virbr0 (192.168.12.1). The guests can successfully access the yum repos on the host by IP address, so for example curl 192.168.122.1/repo1 returns the content without problems. But I'd like to have the guests be able to reach the web server on the host by name rather IP address. I added the desired name record to the host's /etc/hosts file and libvirt's dnsmasq service seems to be serving that correctly to the guests since nslookup and dig successfully resolve the name on the guests: [root@localhost ~]# nslookup repo Server: 192.168.122.1 Address: 192.168.122.1#53 Name: repo Address: 192.168.122.1 [root@localhost ~]# dig repo ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> repo ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55938 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;repo. IN A ;; ANSWER SECTION: repo. 0 IN A 192.168.122.1 ;; Query time: 0 msec ;; SERVER: 192.168.122.1#53(192.168.122.1) ;; WHEN: Tue Sep 17 02:10:46 2013 ;; MSG SIZE rcvd: 38 But curl/ping/etc still fail: [root@localhost ~]# curl repo curl: (6) Couldn't resolve host 'repo' While a request via ip address works: [root@localhost ~]# curl 192.168.122.1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <html> <head> <title>Index of /</title> [...] Same with ping: [root@localhost ~]# ping repo ping: unknown host repo [root@localhost ~]# ping 192.168.122.1 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.110 ms 64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.146 ms 64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.191 ms ^C --- 192.168.122.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2298ms rtt min/avg/max/mdev = 0.110/0.149/0.191/0.033 ms I tried adding repo 192.168.122.1 to the guests' /etc/hosts files but still no dice. Also tried changing guests' /etc/nsswitch.conf with both: hosts: files dns and hosts: dns files I've read the relevant libvirt documentation and I'm not sure where else to learn more about this and be able to move forward with it.
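
    Since nslookup and dig talk to the DNS server directly while curl and ping resolve through glibc/nsswitch, it may help to test the same path the failing tools use; a short sketch of the checks, run on a guest:

    ```sh
    # Resolve through NSS (the path curl/ping use), not just raw DNS:
    getent hosts repo

    # Confirm which resolver and lookup order the guest actually has:
    cat /etc/resolv.conf
    grep '^hosts:' /etc/nsswitch.conf

    # Temporary workaround while debugging: pin the name locally on the guest
    echo '192.168.122.1 repo' >> /etc/hosts    # requires root
    ```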

    Read the article

  • DHCP and DNS services configuration for VOIP system, windows domain, etc

    - by Stemen
    My company has numerous physical offices (for purposes of this discussion, 15 buildings). Some of them are well-connected to our primary data center via fiber. Others will be connected to the data center by P2P T1. We are in the beginning stages of implementing an Avaya VOIP telephone system, and we will be replacing a significant portion of our network infrastructure in the process. In tandem with the phone system implementation, we are going to be re-addressing some of our networks, and consolidating most of our Windows domains into one (not all domains, just most). We currently have quite a few Windows domains, and they of course each have their own DNS zones. A few of those networks currently use DHCP, but the majority use static IP assignments for every device. I'm tired of managing static assignments -- I want to use DHCP configuration on everything except servers. Printers and etc will have DHCP reservations. The new IP phones will need to get IP addresses from DHCP, though they need to be in a separate VLAN from the computers/printers/etc. The computers and printers need to be registered in DNS. That's currently handled by the Windows DHCP servers on each of the respective domains. We need to place a priority on DHCP and DNS being available on a per-site basis (in case something were to interrupt the WAN connection) for computers and (primarily) phones. Smaller locations (which will have IP phones but not be a member of any Windows domain) will not have any Windows DNS/DHCP server(s) available. We also are looking for the easiest way to replace a part if it were to fail. That is to say, if a server/appliance/router hosting DHCP were to crash hard, and we couldn't extremely quickly recover the DHCP reservations and leases (and subsequently restore them onto a cold spare), we anticipate that bad things could happen. What is the best idea for how to re-implement DNS and DHCP keeping all of the above in mind? Some thoughts that have been raised (by myself or my coworkers): Use Windows DNS and DHCP servers, where they exist, and use IP helpers to route DHCP requests to some other Windows server if necessary. May not be acceptable if the WAN goes down and clients don't get a DHCP response. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by Cisco routers. Every site would be covered for DHCP, but from what I've read, Cisco routers can't handle dynamic registration of DHCP clients to Windows DNS servers, which might create a problem where Cisco routers are used for DHCP. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by some service running on an extremely low-price linux server. Is there any such software that would allow DHCP leases granted by these linux boxes to be dynamically registered on the Windows DNS servers? Come up with a Linux solution for both DNS and DHCP, and deploy low-price linux servers to every site. Requirements would be that the DNS zone be multi-master (like Windows DNS integrated with Active Directory), that DHCP be able to make dynamic DNS registrations in that zone, for every lease (where a hostname is provided and is thus possible), and that multiple servers be either authoritative for the same DHCP scope or at least receiving a real-time copy / replication / sync of the leases table so that if one server dies, we still know which MAC has what address. Purchase dedicated DNS/DHCP appliances, deploying to all sites. 
From what I read/see, this solves all of our technical problems. Then come the financial problems... I don't have a ton of money to spend on this. Or, some other solution that we've thus far overlooked and will consider upon recommendation. Can Cisco routers or Windows servers sync DHCP lease tables so that multiple servers can be authoritative (or active/passive for all I care) for the same scope, in case one of the partners were to fail? I've read online (repeatedly) that ISC's DHCP is able to maintain the same lease table across multiple servers, in order to solve this problem. Does anyone have any experience or advice regarding that?
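
    For the ISC DHCP option mentioned above, failover and dynamic DNS updates are both configured in dhcpd.conf; a minimal, hedged sketch (addresses, timers and the peer name are placeholders, and secure GSS-TSIG updates to an AD-integrated Windows zone need additional tooling beyond what is shown here):

    ```conf
    # dhcpd.conf on the primary; the secondary carries a mirrored stanza
    failover peer "site1" {
      primary;                      # "secondary;" on the partner
      address 10.10.1.2;            # this server
      peer address 10.10.1.3;       # partner server
      max-response-delay 60;
      max-unacked-updates 10;
      mclt 3600;
      split 128;                    # primary-only
      load balance max seconds 3;
    }

    ddns-update-style interim;      # register A/PTR records for leases with hostnames

    subnet 10.10.1.0 netmask 255.255.255.0 {
      pool {
        failover peer "site1";
        range 10.10.1.100 10.10.1.200;
      }
      option domain-name-servers 10.10.1.10;
    }
    ```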

    Read the article

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to successfully get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a Content Management System. I have never dug into CMS before so I'm open to programs that are easy to use. I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before and I think I'm getting familiar with how it works. However the current situation is this. Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system. So I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work but I am currently not having luck. THE SETUP *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsvr\fcgiext.dll, like it should. The fcgiext.ini config has the proper lines: [Types] php=PHP [PHP] ext=C:\program files\PHP\php-cgi.exe And the php.ini file also has the correct configs. All extensions are disabled and I changed the correct things for FastCGI. And everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I launch the "info.php" page () on another computer, I get the following error: FastCGI Error The FastCGI Handler was unable to process the request. Error Details: * Section [PHP] not found in config file. * Error Number: 1413 (0x80070585). * Error Description: Invalid index. HTTP Error 500 - Server Error. Internet Information Services (IIS) A quick Google search reveals that I have it all setup correctly as far as the INI's go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
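
    One detail worth double-checking on 64-bit Windows 2003 is that there are two copies of fcgiext.ini (under system32\inetsrv and SysWOW64\inetsrv) and that the handler mapping, the [Types] entry and the section name all agree; a hedged sketch of the layout the extension expects (my recollection is that the key is ExePath rather than ext, but verify against the FastCGI extension's own documentation):

    ```ini
    ; %WINDIR%\system32\inetsrv\fcgiext.ini
    ; (mirror the same content in %WINDIR%\SysWOW64\inetsrv\fcgiext.ini if the
    ;  worker processes run 32-bit on an x64 host)
    [Types]
    php=PHP

    [PHP]
    ExePath=C:\Program Files\PHP\php-cgi.exe
    ```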

    Read the article

  • xen debian: domU can't get outside

    - by iftol
    hi every body. i'm a trainee as a sysAdmin, it is my first expérience with virtualization. i have a server setup debian xen 3 with 2 physical interfaces. eth 0 for local network 10.0.0.1 and eth1 for internet (194.X.X.4). i created 3 VMs (web server, mail server and dabase server) with local ip addresses 172.10.0.x/24. the problem i had first is that domU can't ping dom0. i asked the sysAdmin of our ISP and he sais that he fogot to setup the bridginb. so he ceated a bridge with 172.10.0.1/24 after that i was able to ping the real server (194.X.X.4). but i can't go out side from my VMs, how can i fixe this issue? real or physical server ifconfig eth0 Link encap:Ethernet HWaddr 23:26:34:84:ce:xe inet adr:10.1.3.12 Bcast:10.1.3.255 Masque:255.255.255.0 adr inet6: fe80::226:b9ff:fe84:ceb4/64 Scope:Lien UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:412006 errors:0 dropped:0 overruns:0 frame:0 TX packets:411296 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 RX bytes:31410957 (29.9 MiB) TX bytes:31178370 (29.7 MiB) Interruption:36 Mémoire:d6000000-d6012100 eth1 Link encap:Ethernet HWaddr 23:26:34:84:ce:xe inet adr:194.x.x.4 Bcast:194.254.167.255 Masque:255.255.255.0 adr inet6: fe80::226:b9ff:fe84:ceb6/64 Scope:Lien UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:25872332 errors:0 dropped:0 overruns:0 frame:0 TX packets:414578 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 RX bytes:2642278343 (2.4 GiB) TX bytes:35436775 (33.7 MiB) lo Link encap:Boucle locale inet adr:127.0.0.1 Masque:255.0.0.0 adr inet6: ::1/128 Scope:Hôte UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1308073 errors:0 dropped:0 overruns:0 frame:0 TX packets:1308073 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 RX bytes:109871395 (104.7 MiB) TX bytes:109871395 (104.7 MiB) peth1 Link encap:Ethernet HWaddr 23:26:34:84:ce:xe UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:31818694 errors:0 dropped:0 overruns:0 frame:0 TX packets:414818 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 RX bytes:5197318822 (4.8 GiB) TX bytes:37904897 (36.1 MiB) Interruption:48 Mémoire:d8000000-d8012100 vif281.0 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff adr inet6: fe80::fcff:ffff:feff:ffff/64 Scope:Lien UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:207 errors:0 dropped:0 overruns:0 frame:0 TX packets:298 errors:0 dropped:2 overruns:0 carrier:0 collisions:0 lg file transmission:32 RX bytes:24629 (24.0 KiB) TX bytes:28404 (27.7 KiB) vif281.1 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff adr inet6: fe80::fcff:ffff:feff:ffff/64 Scope:Lien UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:45 errors:0 dropped:47063 overruns:0 carrier:0 collisions:0 lg file transmission:32 RX bytes:0 (0.0 B) TX bytes:4449 (4.3 KiB) vif282.0 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff adr inet6: fe80::fcff:ffff:feff:ffff/64 Scope:Lien UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:78 errors:0 dropped:0 overruns:0 frame:0 TX packets:13 errors:0 dropped:1 overruns:0 carrier:0 collisions:0 lg file transmission:32 RX bytes:5041 (4.9 KiB) TX bytes:714 (714.0 B) xenbr0 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff inet adr:172.10.0.1 Bcast:172.10.0.255 Masque:255.255.255.0 adr inet6: fe80::5c72:c6ff:fe49:7fe/64 Scope:Lien UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7180 errors:0 
dropped:0 overruns:0 frame:0 TX packets:8615 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 RX bytes:756804 (739.0 KiB) TX bytes:791206 (772.6 KiB) brtcl show bridge name bridge id STP enabled interfaces eth1 8000.0026b984ceb6 no peth1 vif281.1 xenbr0 8000.feffffffffff no vif281.0 vif282.0 network-multi-bridge /etc/xen/scripts/network-virtual start vifnum="0" bridgeip="172.10.0.1/24" brnet="172.10.0.0/24" VM webserver eth0 Link encap:Ethernet HWaddr 00:16:3E:42:33:70 inet addr:172.10.0.2 Bcast:172.10.0.255 Mask:255.255.255.0 inet6 addr: fe80::216:3eff:fe42:3370/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3 errors:0 dropped:0 overruns:0 frame:0 TX packets:27 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:126 (126.0 b) TX bytes:2036 (1.9 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) Thank you for your help.
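
    Since the guests sit on 172.10.0.0/24 behind xenbr0 while only dom0 holds the public address, dom0 has to forward and NAT their traffic; a hedged sketch of the usual pieces, with interface names taken from the ifconfig output above (verify them before applying, and make the rules persistent once they work):

    ```sh
    # On dom0: enable forwarding and masquerade guest traffic out the public NIC (eth1)
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -s 172.10.0.0/24 -o eth1 -j MASQUERADE
    iptables -A FORWARD -i xenbr0 -o eth1 -j ACCEPT
    iptables -A FORWARD -i eth1 -o xenbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT

    # On each domU: default route and DNS must point back through dom0's bridge address
    ip route add default via 172.10.0.1
    echo 'nameserver 194.x.x.x' >> /etc/resolv.conf   # your ISP's resolver
    ```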

    Read the article

  • nginx, php-fpm, and multiple roots - how to properly try_files?

    - by Carson C.
    I have a server context which is rooted in a login application. The login application handles, well, logins, and then returns a redirect to "/app" on the same server if a login is successful. The application is rooted elsewhere, which is handled by the location block shown here: location ^~ /app { alias /usr/share/nginx/www/website.com/content/public; location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/tmp/php5-fpm.sock; include fastcgi_params; } } This works just fine, however the $uri getting passed to PHP still contains /app, even though I am using alias rather than root. Because of this, the try_files directive fails to a 404 unless I link app -> ./ in /usr/share/nginx/www/website.com/content/public. It's obviously silly to have that link in there, and if that link ever gets lost, bam dead website without an obvious cause. The next thing I tried... Was to remove the try_files directive entirely. This allowed me to rm the app link in my /public folder, and PHP had no problem locating the file and executing it. I used that to dump my $_SERVER global from PHP, and found that "SCRIPT_FILENAME" => "/usr/share/nginx/www/website.com/content/public/index.php" when the browser URI is /app. This is exactly right. Based on my fastcgi_params below, this led me to beleive that try_files $request_filename =404; should work, but no dice. nginx still doesn't find the file, and returns 404. So for right now, it will only work without any try_files directive. PHP finds the file, whereas try_files could not. I understand this may be a PHP security risk. Can anyone indicate how to move forward? The nginx logs don't contain anything relating to the failed try_files attempt, as far as I can see. fastcgi_aparams fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $server_https;
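
    A common workaround for the long-standing alias + try_files interaction is to test the aliased path directly instead of $uri; a hedged sketch (the "if (!-f ...)" form has its own caveats in nginx, so treat this as illustrative rather than canonical):

    ```nginx
    location ^~ /app {
        alias /usr/share/nginx/www/website.com/content/public;

        location ~ \.php$ {
            # $request_filename already resolves through the alias, as the
            # $_SERVER dump above showed, so check it rather than $uri
            if (!-f $request_filename) { return 404; }
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            include fastcgi_params;
        }
    }
    ```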

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this and I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art: |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www| 3128 6999 7000 80 When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth). This is because it's a satellite link. Here's the ssh tunnel I've been using: ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel? Am I going about this the wrong way? Should I create an ssh tunnel on x.x.x.218? I use iptables to stop squid on x.x.x.224 from reading port 80, but to feed from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance, Regarding Eduardo Ivanec's question, here is a netstat -i any port 7000 -nn dump from x.x.x.218: 14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0 14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0 Update 2: When I run the second command, I get the following error in my browser: ERROR The requested URL could not be retrieved The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php Zero Sized Reply Squid did not receive any data for this request. Your cache administrator is webmaster. Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9) remote-site is 172.16.1.224 When I do a tcpdump -i any port 7000 -nn I get the following: root@remote-site:~# tcpdump -i any port 7000 -nn tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused
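
    To get traffic from the first Squid into the tunnel, the usual pattern is to open the local end of the tunnel on the same box as that Squid and point a cache_peer at it; a hedged sketch following the diagram above (note that stock OpenSSH generally refuses Cipher=none, so some encryption overhead is likely unavoidable):

    ```sh
    # On x.x.x.224 (where the browser-facing Squid runs): local port 6999
    # forwards across the satellite link to port 7000 on x.x.x.218
    ssh -f -N -C -o CompressionLevel=9 user@x.x.x.218 -L 6999:127.0.0.1:7000

    # squid.conf on x.x.x.224: send all requests to the tunnel's local end
    #   cache_peer 127.0.0.1 parent 6999 0 no-query default
    #   never_direct allow all
    ```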

    Read the article

  • Deleting windows.edb and unchecking Indexing service lead to hard drive file records swapping

    - by linni
    I followed the instructions listed here:http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on hard drive by deleting the windows.edb indexing file... I also stopped windows search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two usb connected hard drives (J:\ and I:\ ). I'm not sure why I did that, thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky in my ears at the time). The file of course didn't shrink so I ended up deleting it and freeing up over 3 GB of space, yeehaw. However, as soon as I had done this I could not access the usb connected hard drives anymore. The error I got was "I:\photos is not accessible" "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\ Here is where I enter the twilight zone... I try disconnecting I:\ usb hard drive. But XP shows me that instead J:\ drive has disconnected and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists up the contents of the other drive on root level. I tried connecting the drives vice versa and the same thing happens. I try taking one of the hard drives to another computer and when I connect it there it lists up not its own contents but the contents of the other hard drive and gives the same error as above when I try and access any of the folders (even folders on the root that have the same name as folders on the other drive (e.g. J:\photos and I:\photos)??? And no, this is not a me mixing up my drive letters. Computer Manager - Disk management shows the same result as explorer: The drive size is correct (one is 500GB, the other is 640GB) but the drive name is of the opposite drive, as long as the contents. Also, one drive was full of data and the other almost empty but they incorrectly show their free space status of the other drive. Somehow the usb drives seem to have switched file tables, file records, boot records or something, extremely weird! Even weirder, if I try and create a text file or folder on this drive, it works fine, accessing them, saving, whatever, all good, but accessing any other data on the drive gives me an error. Does anyone have a clue what is going on and more importantly, how I can restore the correct folder listings to access my family photos ??? cheers, linni

    Read the article

  • Disk operations freeze Debian

    - by Grzenio
    Hi, I have just installed Debian testing on my new desktop and I am not very happy with performance - when I perform a disk intensive operation, e.g. upgrade packages in the system, everything seems to freeze, e.g. changing tabs in Iceweasel takes 3 seconds. I run the Debian on my 3 year old Thinkpad X60 ultra-portable, and I don't have these issues. (every single parameter of the laptop is much worse than the desktop). I am using the default packaged kernel and scripts. I run hdparm -t /dev/sda1 And I got around 96GB/s, which is expected. What else can I try to make it work better? EDIT: grzes:/home/ga# hdparm -i /dev/sda /dev/sda: Model=WDC WD15EARS-00Z5B1, FwRev=80.00A80, SerialNo=WD-WMAVU1362357 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq } RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=2930277168 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120} PIO modes: pio0 pio3 pio4 DMA modes: mdma0 mdma1 mdma2 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 AdvancedPM=no WriteCache=enabled Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7 * signifies the current active mode EDIT2: Even my wife said "on this new computer I can't do anything when I copy the photos from the camera and its much worse than on the old one". So it must be serious. EDIT3: Updated to 2.6.32, but still no improvement EDIT4: I forgot to mention that the new disk is ext4, the old was ext3. EDIT5: Still not solved. I have a P43 ASUS P5QL-E board. Lines from dmesg that seem relevant: [ 0.370850] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.370852] io scheduler noop registered [ 0.370853] io scheduler anticipatory registered [ 0.370854] io scheduler deadline registered [ 0.370876] io scheduler cfq registered (default) ... [ 0.908233] ata_piix 0000:00:1f.2: version 2.13 [ 0.908243] ata_piix 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.908246] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 0.908275] ata_piix 0000:00:1f.2: setting latency timer to 64 [ 0.908316] scsi0 : ata_piix [ 0.908374] scsi1 : ata_piix [ 0.909180] ata1: SATA max UDMA/133 cmd 0xa000 ctl 0x9c00 bmdma 0x9480 irq 19 [ 0.909183] ata2: SATA max UDMA/133 cmd 0x9880 ctl 0x9800 bmdma 0x9488 irq 19 [ 0.909199] ata_piix 0000:00:1f.5: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.909202] ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ] [ 0.909228] ata_piix 0000:00:1f.5: setting latency timer to 64 [ 0.909279] scsi2 : ata_piix [ 0.909326] scsi3 : ata_piix [ 0.910021] ata3: SATA max UDMA/133 cmd 0xb000 ctl 0xac00 bmdma 0xa480 irq 19 [ 0.910024] ata4: SATA max UDMA/133 cmd 0xa880 ctl 0xa800 bmdma 0xa488 irq 19 [ 0.915575] FDC 0 is a post-1991 82077 ... [ 1.716062] ata1.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1.716074] ata1.01: SATA link down (SStatus 0 SControl 300) [ 1.724318] ata1.00: ATA-8: WDC WD15EARS-00Z5B1, 80.00A80, max UDMA/133 [ 1.724322] ata1.00: 2930277168 sectors, multi 16: LBA48 NCQ (depth 0/32) [ 1.740339] ata1.00: configured for UDMA/133 [ 1.740428] scsi 0:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5 [ 1.746788] scsi 6:0:0:0: CD-ROM ASUS DRW-1608P 1.17 PQ: 0 ANSI: 5 ... 
[ 1.925981] sd 0:0:0:0: [sda] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB) [ 1.926005] sd 0:0:0:0: [sda] Write Protect is off [ 1.926007] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 1.926020] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 1.926092] sda:sr0: scsi3-mmc drive: 40x/40x writer cd/rw xa/form2 cdda tray [ 1.931106] Uniform CD-ROM driver Revision: 3.20 [ 1.931191] sr 6:0:0:0: Attached scsi CD-ROM sr0 ... [ 1.941936] sda1 sda2 sda3 sda4 < sda5 sda6 > [ 1.967691] sd 0:0:0:0: [sda] Attached SCSI disk [ 1.970938] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.970959] sr 6:0:0:0: Attached scsi generic sg1 type 5 ... [ 2.500086] EXT4-fs (sda3): mounted filesystem with ordered data mode ... [ 7.150468] EXT4-fs (sda6): mounted filesystem with ordered data mode
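
    The drive in the hdparm output (WD15EARS) is a 4 KB "Advanced Format" disk, and misaligned partitions on those are a well-known cause of exactly this kind of stall under write load; a hedged set of checks (the real fix, if alignment is off, is repartitioning so each partition starts on a multiple of 8 sectors):

    ```sh
    # Partition start sectors not divisible by 8 are misaligned on 4K-sector drives
    fdisk -lu /dev/sda

    # See which I/O scheduler is active and try deadline as an experiment
    cat /sys/block/sda/queue/scheduler
    echo deadline > /sys/block/sda/queue/scheduler   # runtime change, root required

    # Watch whether the freezes line up with large 'await' values
    iostat -x 5
    ```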

    Read the article

  • Apache Simple Configuration Issue: Setting up per-user directory permission denied problem

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit. I am running into issues setting my per-user directory. The goal is to make localhost/~user map to /home/~user/public_html. I think that I have the permissions right because I have 755 to /home/~user, and I have 755 to /home/~user/public_html/ and I have 777 for all contents inside of /home/~user/public_html/ recursively set. My mod_userdir configuration looks like this: <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir disabled root UserDir enabled huckphin # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disabled" line above, and uncomment # the following line instead: # UserDir public_html The error that I am seeing in the error log is this: [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied When I login as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change on my configuration for this to work? [Added after comments] Hi Andol, thank you for your suggestions. So, first off, you said that you assume that I have the userdir module enabled, but I am not sure what that means exactly. That could be part of the problem. I do have the Module loaded, using the LoadModule directive. I have this: LoadModule userdir_module modules/mod_userdir.so I also did a find on where mod_userdir is, and I found it located here: [huckphin@crhyner-workbox]/% find / . -name '*mod_userdir.so*' 2> /dev/null /usr/lib64/lighttpd/mod_userdir.so /usr/lib64/httpd/modules/mod_userdir.so Is there something else I need to enable? Also, my directory configuration was mentioned. I have uncommented the default configuration given. I have not looked into what all of the configurations mean, and this could probably explain the issue. Here is the Directory that I have for the user directories: <Directory "/home/*/public_html"> AllowOverride FileInfo AuthConfig Limit Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory>
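
    On Fedora, a 13 (Permission denied) for ~user content despite correct POSIX permissions is very often SELinux rather than Apache configuration; a hedged sketch of the checks and the usual boolean/label fixes (confirm against your own audit log before changing anything):

    ```sh
    getenforce
    grep httpd /var/log/audit/audit.log | tail

    # Allow Apache to serve home directories and label the content accordingly
    setsebool -P httpd_enable_homedirs on
    chcon -R -t httpd_user_content_t /home/huckphin/public_html

    # The home directory itself must also be traversable by the apache user
    chmod 711 /home/huckphin
    ```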

    Read the article

  • URL Rewriting Problem

    - by Eray
    I'm trying to rewrite my URL's subdomain. http://username.domain.com will show content of http://www.domain.com/user.php?u=username but URL stay as http://username.domain.com . (I mean url masking) (Username's can contain a-z 0-9 and hypens) Also if subdomain is www or api, don't redirect them I'm using this for my .htaccess RewriteEngine on RewriteCond %{REQUEST_URI} !^/index\.php$ RewriteCond %{HTTP_HOST} !^www\.domain\.com RewriteCond %{HTTP_HOST} ^([^.]+)\.domain\.com RewriteRule .* /user.php?u=%1 [L] (After @mfarver's advice) I'm trying this RewriteEngine on RewriteRule .* /user.php?u=%1 [L] but this time getting 500 Internal Server error: [Mon May 30 20:10:44 2011] [alert] [client 81.6.xx.xxx] /home/blablabla/public_html/.htaccess: AllowOverride not allowed here (from error log) My server's httpd.conf file's virtualhost settings <VirtualHost 109.73.70.169:80> <IfModule mod_userdir.c> UserDir disabled UserDir enabled USERNAME </IfModule> <IfModule concurrent_php.c> php4_admin_value open_basedir "/home/USERNAME/:/usr/lib/php:/usr/php4/lib/php:/usr/local/lib/php:/usr/local/php4/lib/php:/tmp" php5_admin_value open_basedir "/home/USERNAME/:/usr/lib/php:/usr/local/lib/php:/tmp" </IfModule> <IfModule !concurrent_php.c> <IfModule mod_php4.c> php_admin_value open_basedir "/home/USERNAME/:/usr/lib/php:/usr/php4/lib/php:/usr/local/lib/php:/usr/local/php4/lib/php:/tmp" </IfModule> <IfModule mod_php5.c> php_admin_value open_basedir "/home/USERNAME/:/usr/lib/php:/usr/local/lib/php:/tmp" </IfModule> <IfModule sapi_apache2.c> php_admin_value open_basedir "/home/USERNAME/:/usr/lib/php:/usr/php4/lib/php:/usr/local/lib/php:/usr/local/php4/lib/php:/tmp" </IfModule> </IfModule> ServerName DOMAIN.net ServerAlias *.DOMAIN.net <Directory "/home/USERNAME/public_html"> Options FollowSymLinks Allow from all AllowOverride All </Directory> DocumentRoot /home/USERNAME/public_html ServerAdmin [email protected] UseCanonicalName Off CustomLog /usr/local/apache/domlogs/DOMAIN.net combined CustomLog /usr/local/apache/domlogs/DOMAIN.net-bytes_log "%{%s}t %I .\n%{%s}t %O ." ## User USERNAME # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup USERNAME USERNAME </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup USERNAME USERNAME </IfModule> ScriptAlias /cgi-bin/ /home/USERNAME/public_html/cgi-bin/ # To customize this VirtualHost use an include file at the following location # Include "/usr/local/apache/conf/userdata/std/2/USERNAME/DOMAIN.net/*.conf" </VirtualHost> Also *.DOMAIN.net added to my DNS ZONES as A record.
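
    For the wildcard-subdomain rewrite itself, the host name has to be captured in a RewriteCond and the reserved subdomains excluded; a hedged .htaccess sketch along the lines the post is aiming for (note that the "AllowOverride not allowed here" error in the log usually means an AllowOverride line ended up inside the .htaccess file itself, where it is not permitted):

    ```apache
    RewriteEngine On

    # Skip the bare domain and reserved subdomains
    RewriteCond %{HTTP_HOST} !^(www|api)\.domain\.com$ [NC]
    # Capture the subdomain (letters, digits, hyphens)
    RewriteCond %{HTTP_HOST} ^([a-z0-9-]+)\.domain\.com$ [NC]
    # Avoid rewriting requests that already target user.php
    RewriteCond %{REQUEST_URI} !^/user\.php
    RewriteRule ^ /user.php?u=%1 [L,QSA]
    ```

    Because there is no [R] flag the rewrite stays internal, so the browser keeps the subdomain in the address bar, which matches the masking requirement (the wildcard ServerAlias and DNS A record already route those hosts to this vhost).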

    Read the article

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch follow this guide, details as below: mkdir /usr/local/mysqlc732/ cd /usr/local/src/mysql-cluster-gpl-7.3.2 cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON make make install Everything works fine until this step: # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n FLEXASYNCH - Starting normal mode Perform benchmark of insert, update and delete transactions 1 number of concurrent threads 80 number of parallel operation per thread 100 transaction(s) per round 2 iterations Load Factor is 80% 25 attributes per table 1 is the number of 32 bit words per attribute Tables are with logging Transactions are executed with hint provided No force send is used, adaptive algorithm used Key Errors are disallowed Temporary Resource Errors are allowed Insufficient Space Errors are disallowed Node Recovery Errors are allowed Overload Errors are allowed Timeout Errors are allowed Internal NDB Errors are allowed User logic reported Errors are allowed Application Errors are disallowed Using table name TAB0 NDBT_ProgramExit: 1 - Failed ndb_cluster.log: WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned eror: 'No free node id found for mysqld(API).' I also have recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in the debug mode? # /usr/local/mysqlc732/bin/flexAsynch -h FLEXASYNCH Perform benchmark of insert, update and delete transactions Arguments: -t Number of threads to start, default 1 -p Number of parallel transactions per thread, default 32 -o Number of transactions per loop, default 500 -l Number of loops to run, default 1, 0=infinite -load_factor Number Load factor in index in percent (40 -> 99) -a Number of attributes, default 25 -c Number of operations per transaction -s Size of each attribute, default 1 (PK is always of size 1, independent of this value) -simple Use simple read to read from database -dirty Use dirty read to read from database -write Use writeTuple in insert and update -n Use standard table names -no_table_create Don't create tables in db -temp Create table(s) without logging -no_hint Don't give hint on where to execute transaction coordinator -adaptive Use adaptive send algorithm (default) -force Force send when communicating -non_adaptive Send at a 10 millisecond interval -local 1 = each thread its own node, 2 = round robin on node per parallel trans 3 = random node per parallel trans -ndbrecord Use NDB Record -r Number of extra loops -insert Only run inserts on standard table -read Only run reads on standard table -update Only run updates on standard table -delete Only run deletes on standard table -create_table Only run Create Table of standard table -drop_table Only run Drop Table on standard table -warmup_time Warmup Time before measurement starts -execution_time Execution Time where measurement is done -cooldown_time Cooldown time after measurement completed -table Number of standard table, default 0
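
    The "No free node id found for mysqld(API)" warning normally means the cluster's config.ini has no spare API slot for an extra client such as flexAsynch to claim; a hedged sketch of adding unbound slots (section names are the standard MySQL Cluster ones, counts and hostnames are illustrative):

    ```ini
    # config.ini on the management node
    [mysqld]
    hostname=dbhost        # the existing mysqld

    # Empty slots claimable by any API client (flexAsynch, ndb tools, etc.)
    [api]
    [api]
    [api]
    ```

    After editing, the management node needs to be restarted with the new configuration (e.g. ndb_mgmd --reload), followed by a rolling restart of the data nodes so the extra slots take effect.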

    Read the article

  • Multiple Homed Windows 2008 Server / Windows 7 Client

    - by Daniel Scott
    I have a small Windows 2008 network, with some Windows 7 clients. The clients are both laptops with docking stations and I would like them to communicate with the Windows 2008 server (for filesharing) through the wired network whilst they're docked. Internet connectivity for all machines (clients and server) is via a Wireless LAN, so the wireless adapter in the Windows 7 clients stays active while they're docked. When the laptops are un-docked, it would be nice to still be able to contact the windows 2008 server for print sharing (and slower file sharing) - hence the server also being on the wireless LAN. The windows 2008 server is running Active Directory, DHCP and DNS. It controls DHCP leases on the wired network and holds the DNS records for "myserver.mycompany.local", which is what the filesharing clients connect to. Ideally I'd like the DNS records to return the wired IP first so that this is the address that the laptops will attempt initially - but there doesn't seem to be a way to do that? At present the server's IP on the wireless LAN comes out of an nslookup above the wired Lan IP. The multi-homing works perfectly - but in the wrong order! Switch on the wireless lan and ping myserver and it goes to the wireless IP. Disable the wireless on the client and do the same ping again and after a couple of seconds it starts pinging the wired address. Does anyone have any suggestions on how to make this work in a predictable order? - or even if it can work. Alternative 1? If it can't work, then would this work: Remove the wireless adapter from the server, put a wireless router/bridge on the wired network (set up to route to/from the wireless LAN's subnet), then configure the clients with two routes to the (now) single IP of the server with metrics favouring direct communication over the wired LAN first? Alternative 2? Should I instead single-home the laptops so all of their connectivity is via the wired-LAN while they're docked? (and route via the windows 2008 server - or a dedicated wireless bridge/router)? My concern here is that I'd like undocking to be seamless - and if the clients are in the middle of downloading something from the internet I wouldn't want whatever they're doing interupted as they switch IP addresses onto the Wireless network. Perhaps this isn't the case and I'm concerned over nothing? Any thoughts? :) UPDATE I seem to have cracked it (at least DNS entries come out in the order I hope for - and pinging the server with various combinations of wired, wireless and both interfaces enabled uses the IP I want) ... I set the binding order of the NICs on the Server (which is acting as Domain Controller, DHCP and DNS server) so that the Wired NIC is before the Wireless adapter. (Start -- type "Network Interfaces" -- Select "View Network Connections" -- Press Alt to show classic dropdown menus -- Advanced -- Advanced Settings) Now, an nslookup (from the client) of the server's hostname returns the Wired IP first, followed by the Wireless IP. The wired IP now seems to be used whenever it's contactable. Incidentally, the metrics on the wired and wireless routes (on the client) also favour the wired LAN (based on Windows' automatically assigned metrics) - but this was always the case, even when I was having trouble getting the wired IP to be "favoured". I'm not entirely sure if this is coincidence - or if a DNS server running on Windows, handing back IP addresses for itself does actually take the binding order of it's own network interfaces into account? 
It would be interesting to hear from someone who can confirm or deny that (or confirm that the binding order on the server plays a role for some other reason?)

    Read the article

  • Vagrant (Virtualbox) host-only multiple node networking issue

    - by Lorin Hochstein
    I'm trying to use a multi-VM vagrant environment as a testbed for deploying OpenStack, and I've run into a networking problem with trying to communicate from one VM, to a VM-inside-of-a-VM. I have two Vagrant nodes, a cloud controller node and a compute node. I'm using host-only networking. My Vagrantfile looks like this: Vagrant::Config.run do |config| config.vm.box = "precise64" config.vm.define :controller do |controller_config| controller_config.vm.network :hostonly, "192.168.206.130" # eth1 controller_config.vm.network :hostonly, "192.168.100.130" # eth2 controller_config.vm.host_name = "controller" end config.vm.define :compute1 do |compute1_config| compute1_config.vm.network :hostonly, "192.168.206.131" # eth1 compute1_config.vm.network :hostonly, "192.168.100.131" # eth2 compute1_config.vm.host_name = "compute1" compute1_config.vm.customize ["modifyvm", :id, "--memory", 1024] end end When I try to start up a (QEMU-based) VM, it boots successfully on compute1, and its virtual nic (vnet0) is connected via a bridge, br100: root@compute1:~# brctl show 100 bridge name bridge id STP enabled interfaces br100 8000.08002798c6ef no eth2 vnet0 When the QEMU VM makes a request to the DHCP server (dnsmasq) running on controller, I can see the request reaches the controller because of the output on the syslog on the controller: Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPDISCOVER(br100) fa:16:3e:07:98:11 Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPOFFER(br100) 192.168.100.2 fa:16:3e:07:98:11 However, the DHCPOFFER never makes it back to the VM running on compute1. If I watch the requests using tcpdump on the vboxnet3 interface on my host machine that runs Vagrant (Mac OS X), I can see both the requests and the replies $ sudo tcpdump -i vboxnet3 -n port 67 or port 68 tcpdump: WARNING: vboxnet3: That device doesn't support promiscuous mode (BIOCPROMISC: Operation not supported on socket) tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on vboxnet3, link-type EN10MB (Ethernet), capture size 65535 bytes 22:51:20.694040 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:20.694057 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:20.696047 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311 22:51:23.700845 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:23.700876 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:23.701591 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311 22:51:26.705978 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:26.705995 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:26.706527 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311 But, if I tcpdump on eth2 on compute, I only see the requests, not the replies: root@compute1:~# tcpdump -i eth2 -n port 67 or port 68 tcpdump: WARNING: eth2: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes 02:51:20.240672 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 02:51:23.249758 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 02:51:26.258281 IP 
0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 At this point, I'm stuck. I'm not sure why the DHCP replies aren't making it to the compute node. Perhaps it has something to do with the configuration of the VirtualBox virtual switch/router? Note that eth2 interfaces on both nodes have been set to promiscuous mode.
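
    Setting eth2 promiscuous inside the guests is not sufficient with VirtualBox host-only networking; the virtual adapter's own promiscuous-mode policy defaults to deny, so frames addressed to the nested QEMU VM's MAC can be dropped before they reach compute1. A hedged addition to the Vagrantfile, in the same customize syntax already used there (nic3 should correspond to eth2, the second host-only interface, but verify the numbering):

    ```ruby
    compute1_config.vm.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
    # equivalent manual command while the VM is powered off:
    #   VBoxManage modifyvm <vm-name> --nicpromisc3 allow-all
    ```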

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn. Manual file transfers (like FTP) is wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of and I am hoping that someone can submit a more elegant solutions. 1) File Based The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the url is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To setup this I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack. So someone loading site.com could set the host to site.fakeTLD and then the system would load my development config files instead of production. 2) Config Based The config files contain on section for development - and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting. 3) File System Check Based All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found then it knows it is in development mode - if missing then it knows it is in production mode. Since the file is NEVER ADDED to the repo then it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right and causes a slight slow down since all filesystem checks are slow.
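
    A further option, which avoids both the host-header spoofing risk and files that must never be committed, is to key off a server-set environment variable; a hedged PHP sketch (APP_ENV is an arbitrary name that would be set per machine via SetEnv, fastcgi_param or the shell):

    ```php
    <?php
    // config.php: pick a config set from the environment, defaulting to
    // production so that a missing variable fails safe.
    $env = getenv('APP_ENV') ?: 'production';

    $configs = array(
        'production'  => array('db_host' => 'db.internal', 'debug' => false),
        'development' => array('db_host' => '127.0.0.1',   'debug' => true),
    );

    if (!isset($configs[$env])) {
        throw new RuntimeException("Unknown APP_ENV: $env");
    }

    return $configs[$env];
    ```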

    Read the article

  • What are some fast methods for navigating to frequently used folders in Windows 7?

    - by fostandy
    (This is a followup question from my previous question.) In windows XP I used to be able to quickly navigate to frequently used folders by making use of the 'Favorites' menu item and the hotkey behaviour. In certain conditions it could be set up so that getting to a particular folder was as easy as alt-a x (and without a file explorer window open it was as fast as win-e alt-a x). I am struggling to get anywhere near this speed in Windows 7 and would like to solicit advice from others regarding fast folder navigation to see if I am missing any methods. My current way to navigate quickly is basically move hand to mouse move cursor to navigation pane/pain. scroll all the way to the top (because normally I the panel is focused on whatever deep directory structure I am already in). sift through my 50+ favorites to get the one I want, or click a link to a folder that contains further links in some sort of 'pseudo-tree' functionality. select it. This is slower than my previous method by upwards of an order of magnitude. There are a couple of things I've contemplated: add expandable folders, not just direct links, to the favorites menu. add expandable folders, not just direct links, to the start menu. add links of my favorite folders to a submenu of the start menu so that they come up when I search them. They do but this still rather cumbersome started using 7stacks - url here (I cannot link the url directly due to lack of reputation but http://www.alastria.com/index.php?p=software-7s). This is about the closest I've gotten to some sort of compact, customizeable, easy to access, tree based navigation structure. How do you power users quickly navigate to your favorite folders? Are there keyboard shortcuts I am missing? Can someone recommend other apps or addon or extensions that can achieve this sort of functionality? The Current solution (thanks to the answers below) I am going to use is a combination of Autohotkey and 7stacks - autohotkey to launch 7stacks, 7stacks with the 'menu' stack type for fast, key-enabled navigation to folders organised in a tree structure. This solves about 90% of the issue, the only issues are (note that these are really minor, I am really splitting hairs more than anything here) Can't use this for existing folder navigation (ie already have a explorer window open, want to go to another directory) A bit more cumbersome to add/remove entries to compared to xp favorites. A little slower than xp favorites. Whatever. I'm happy. Thanks guys. I think the answer is a split to John T and Kelbizzle - I've elected to give the answer to John T and +1 to Kelbizzle as I had already mentioned 7stacks.

    Read the article

  • how does openvpn decide which interface to get IP addrs from

    - by bkrupa
    Using ubuntu 10.04 on both ends. We have a client and server machine on the SAME network attempting to make a vpn connection. We use the config files from here and made minimal changes. The server and client start and seem to connect without any trouble. The server looks like: Wed Feb 23 22:13:22 2011 MULTI: multi_create_instance called Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Re-using SSL/TLS context Wed Feb 23 22:13:22 2011 192.168.1.55:47166 LZO compression initialized Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel MTU parms [ L:1574 D:138 EF:38 EB:0 ET:0 EL:0 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Local Options hash (VER=V4): 'f7df56b8' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Expected Remote Options hash (VER=V4): 'd79ca330' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 TLS: Initial packet from 192.168.1.55:47166, sid=69112e42 5458135b *...* Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Feb 23 22:13:22 2011 192.168.1.55:47166 [client1] Peer Connection Initiated with 192.168.1.55:47166 On the client side the connection looks like: Wed Feb 23 22:20:07 2011 [server] Peer Connection Initiated with [AF_INET]192.168.1.41:1194 Wed Feb 23 22:20:10 2011 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1) Wed Feb 23 22:20:10 2011 PUSH: Received control message: 'PUSH_REPLY,route-gateway 10.8.0.4,ping 10,ping-restart 120,ifconfig 10.8.0.50 255.255.255.0' ... Wed Feb 23 22:20:10 2011 /sbin/ifconfig tap0 10.8.0.50 netmask 255.255.255.0 mtu 1500 broadcast 10.8.0.255 Wed Feb 23 22:20:10 2011 Initialization Sequence Completed The openvpn server has been configured to assign ip addresses in the range 10.8.0.* and the client has been given 10.8.0.50. When I run the following nmap from the client: Starting Nmap 5.00 ( http://nmap.org ) at 2011-02-23 22:04 EST Host 10.8.0.50 is up (0.00047s latency). Nmap done: 256 IP addresses (1 host up) scanned in 30.34 seconds Host 192.168.1.1 is up (0.0025s latency). Host 192.168.1.18 is up (0.074s latency). Host 192.168.1.41 is up (0.0024s latency). Host 192.168.1.55 is up (0.00018s latency). Nmap done: 256 IP addresses (4 hosts up) scanned in 6.33 seconds If I run an nmap from the server on 10.8.0.* I get nothing. If the client has two interfaces (wireless and tap device) when you look for a certain ip address, how does it decide which interface to connect on? edit I am trying to set up a vpn so that I can connect to my home network from a remote network. It seems like openvpn is connecting but none of the computers on my home network appear as network machines even after the connection is "Established". Stripped versions of the client and server config files are posted below. Thanks for any help you can offer. 
server.conf port 1194 proto udp dev tap ca /etc/openvpn/easy-rsa/keys/ca.crt cert /etc/openvpn/easy-rsa/keys/server.crt key /etc/openvpn/easy-rsa/keys/server.key # This file should be kept secret dh /etc/openvpn/easy-rsa/keys/dh1024.pem ifconfig-pool-persist ipp.txt server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100 keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status.log verb 3 client.conf client dev tap dev-node tap0901 proto udp remote ********** 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert client1.crt key client1.key comp-lzo verb 3 one other thing that might be helpful, I tried to connect using the openvpn gui for windows and the connection stalls out on "obtaining configuration" and the bar just scrolls forever.
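    A note on the two underlying questions here, offered as a hedged sketch rather than anything taken from the poster's machines. Which interface a packet leaves on is decided by the kernel routing table (longest-prefix match), not by OpenVPN itself, so it can be checked per-address with iproute2. And because this server uses dev tap with server-bridge, handing out 10.8.0.x addresses is not enough on its own: the server's tap interface still has to be bridged to the LAN interface (and in a bridged setup the client pool normally lives on the LAN subnet) before VPN clients can see other machines on the home network. The commands below are roughly what OpenVPN's bridge-start helper script does, with interface names and addresses assumed:

    # On the client: ask the kernel which interface/route it would use
    ip route get 10.8.0.50      # should go via tap0 once the tunnel is up
    ip route get 192.168.1.41   # stays on the LAN/wireless interface

    # On the server: bridge the tap device with the LAN NIC (names assumed: eth0, br0)
    openvpn --mktun --dev tap0
    brctl addbr br0
    brctl addif br0 eth0
    brctl addif br0 tap0
    ifconfig tap0 0.0.0.0 promisc up
    ifconfig eth0 0.0.0.0 promisc up
    ifconfig br0 192.168.1.41 netmask 255.255.255.0 up
    # server.conf would then reference the existing device: dev tap0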

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to set up nginx to redirect http requests to https. Many are outdated, or just flat out wrong. server { listen *:80; server_name <%= @fqdn %>; #root /nowhere; #rewrite ^ https://$server_name$request_uri? permanent; #rewrite ^ https://$server_name$request_uri permanent; #return 301 https://$server_name$request_uri; #return 301 http://$server_name$request_uri; #return 301 http://192.168.33.10$request_uri; return 301 http://$host$request_uri; } server { listen *:443 ssl default_server; server_name <%= @fqdn %>; server_tokens off; root <%= @git_home %>/gitlab/public; ssl on; ssl_certificate <%= @gitlab_ssl_cert %>; ssl_certificate_key <%= @gitlab_ssl_key %>; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers AES:HIGH:!ADH:!MDF; ssl_prefer_server_ciphers on; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } # if a file, which is not found in the root folder is requested, # then the proxy pass the request to the upsteam (gitlab puma) location @gitlab { proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_redirect off; etc.... I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https? tailf /var/log/nginx/access.log 192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" 192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" tailf /var/log/nginx/gitlab_error.log 2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10" Resources http://wiki.nginx.org/Pitfalls How to make nginx redirect How to force or redirect to SSL in nginx? nginx ssl redirect Nginx & Https Redirection https://www.tinywp.in/301-redirect-wordpress/ How to force or redirect to SSL in nginx?
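    Two things stand out in the configuration above, offered as a hedged reading rather than a definitive diagnosis. First, the only non-commented line in the port-80 server is return 301 http://$host$request_uri; - it redirects back to the http scheme, so no request is ever promoted to https. Second, seeing the stock 'Welcome to nginx' page suggests that a different server block (the distribution's default site) is winning the match for port 80, in which case edits to this file never apply to those requests. The usual minimal pattern looks roughly like this (a sketch, not the poster's exact file; the default site would also need to be disabled or overridden):

    server {
        listen 80 default_server;               # claim port 80 ahead of the stock default site
        server_name _;                          # catch-all name
        return 301 https://$host$request_uri;   # note the https scheme
    }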

    Read the article

  • Microphones not working on Apple MacBook Air 1,1 (Early 2008) under Linux

    - by jj_p
    I'm running Linux on an mba. I can't make the microphones (neither external nor internal) work. I test using alsamixer and arecord -d 5 test-mic.wav together with aplay test-mic.wav It seems there is a problem with the kernel trying to decipher Apple's (intentionally) corrupted 'bios'; in particular the mic pins are wrongly assigned. As far as we are concerned here, is there any difference between using EFI and BIOS-compatibility mode? (see https://wiki.archlinux.org/index.php/MacBook where they claim to have everything working out of the box on mba1,1) A nice proposal would be to compile the latest Linux kernel and run hda-jack-retask to find the right configuration (in the case of a Realtek codec, the missing things I'm supposed to check are either some vendor-specific COEF verbs, EAPD or GPIO setup), and then come up with a kernel patch to address the issue. Since I'm not that familiar with this last part of the story, can anyone help me through this process? Some useful data: The output from the alsa script run as root http://www.alsa-project.org/db/?f=adae8ebee1007043fe83414ac4972319e02255fa The command hda-jack-sense-test -a (with external mic in) Pin 0x14 (Internal Speaker): present = No Pin 0x15 (Green HP Out): present = Yes Pin 0x16 (Not connected): present = No Pin 0x17 (Not connected): present = No Pin 0x18 (Not connected): present = No Pin 0x19 (Not connected): present = No Pin 0x1a (Not connected): present = No Pin 0x1b (Not connected): present = No Pin 0x1c (Not connected): present = No Pin 0x1d (Not connected): present = No Pin 0x1e (Not connected): present = No Pin 0x1f (Not connected): present = No Most likely the chip is Realtek ALC885 (compare also ALC889A) http://guide-images.ifixit.net/igi/bBTSqaeK5JpQ1AWe.large , although at the moment alsa reads it as ALC889A Takashi Iwai's tutorial https://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio.txt Some people researched the original files from a running OS X installation on this same model (I think the relevant files are AppleHDA.kext/Contents/MacOS/AppleHDA AppleHDA.kext/Contents/PlugIns/AppleHDAHardwareConfigDriver.kext/Contents/Info.plist AppleHDA.kext/Contents/Resources/layout12.xml.zlib AppleHDA.kext/Contents/Resources/Platforms.xml.zlib) http://www.insanelymac.com/forum/topic/220090-alc889a-pin-configuration/#entry1554954 Datasheet http://www.realtek.info/pdf/ALC885_1-1.pdf (from the same Realtek page one can also try to download the Linux driver, but this is just taken from the ALSA project, as stated in the readme file.) Compare with this Arch guy http://www.alsa-project.org/db/?f=3ca8243c0626844f0264a3faad0aa72018bc14f4 Here for the first time support for audio (except mics) for mba1,2 (which is morally the same as 1,1) is patched into the kernel http://www.alsa-project.org/pipermail/alsa-devel/2010-February/025511.html The same jack supposedly works both for HP and ext MIC, I think it's called TRRS, and it's the same as the one used e.g. for iPhones This guy might have done a similar job, though for a more recent model and for sound globally, not just mics: http://blogs.aerys.in/jeanmarc-leroux/2013/09/15/fixing-2013-macbook-air-ubuntu-sound-issue/ (this is a mirror of http://unix.stackexchange.com/questions/73044/microphones-not-working-on-apple-macbook-air-1-1-early-2008-under-linux )
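    For what it's worth, the pin-override route does not strictly require rebuilding the kernel first: snd-hda-intel can load a plain-text patch file at module load time (the format is described in the HD-Audio.txt document linked above), which makes it possible to test pin configurations found with hda-jack-retask before turning them into a proper kernel quirk. This is a hedged sketch only - the codec/subsystem IDs and the pin default values below are placeholders that would have to come from the actual alsa-info output and from experimentation:

    # /lib/firmware/mba11-hda-patch.fw  (values are assumptions, not a known-good config)
    [codec]
    0x10ec0885 0x106b00a1 0        # vendor id, subsystem id, codec address from alsa-info

    [pincfg]
    0x18 0x90a60130                # candidate internal-mic pin default
    0x19 0x04a19040                # candidate external-mic pin on the TRRS jack

    # /etc/modprobe.d/mba11-hda.conf
    options snd-hda-intel patch=mba11-hda-patch.fw

    Once a working combination is found, the same pin values can be proposed upstream as a fixup entry for the codec driver.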

    Read the article

  • Is the master database backup crucial for restoring MS SQL server in the event where you have to res

    - by Imagineer
    I have been advised by Commvault partner support to turn off the backup of the master database, as the backup failed due to the log file being locked. The following is the advice given: "The message is caused by Commvault’s inability to backup the master database’s transaction log. If this is happening intermittently its possible that something is locking the transaction log, preventing SQL iData agent from accessing the log. Typically the master database is just a template and is not used by any applications (applications that do require the use of an SQL database create their own) so there should be no harm in preventing it from being backed up You can do this by nominating NOT to back it up in the primary copy for the SQL data agent" The following is the error that I get. sqlxx SQL Server/ SQLxx N/A/ System DBs 19856* (CWE) Transaction Log N/A 01/08/2010 19:00:16 (01/08/2010 19:00:18 ) 01/08/2010 19:03:15 (01/08/2010 19:03:14 ) 1.44 MB 0:01:11 0.071 2 0 1 ITD014L2 Failure Reason: • ERROR CODE [30:325]: Error encountered during backup. Error: [ERROR: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot back up the log of the master database. Use BACKUP DATABASE instead. [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP LOG is terminating abnormally.] Job Options:Create new index, Start new media, Backup all subclients, Truncation Log, Follow mount points , Backup files protected by system file protection , Stop DHCP service when backing up system state data, Stop WINS service when backing up system state data Associated Events: • 79714 [backupxx/JobManager] [01/08/2010 19:03:15 ]: Backup job [19856] completed. Client [sqlxx], Agent Type [SQL Server], Subclient [System DBs], Backup Level [Transaction Log], Objects [2], Failed [1], Duration [00:02:59], Total Size [1.44 MB], Media or Mount Path Used [ITD014L2]. • 79712 [sqlxx/SQLiDA] [01/08/2010 19:01:53 ]: Error encountered during backup. Error: [ERROR: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot back up the log of the master database. Use BACKUP DATABASE instead. [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP LOG is terminating abnormally.] • 79711 [sqlxx/SQLiDA] [01/08/2010 19:01:51 ]: Query Result [[Microsoft][ODBC SQL Server Driver][SQL Server]Cannot back up the log of the master database. Use BACKUP DATABASE instead. [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP LOG is terminating abnormally.]. • 79707 [backupxx/JobManager] [01/08/2010 19:00:15 ]: New backup request received for Client [sqlxx], iDataAgent [SQL Server], Instance [SQLxx], Subclient [System DBs], Backup Level [Transaction Log]. Files failed to back up: • Backup Database[master] Failed Please advise, thank you.
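    A hedged observation on the error text itself: "Cannot back up the log of the master database. Use BACKUP DATABASE instead" is SQL Server's standard response here, because master always runs in the SIMPLE recovery model and therefore only supports full (or copy-only) backups, never transaction-log backups. Note also that the "just a template" description in the quoted advice fits the model database rather than master - master holds logins, linked servers, configuration and the record of every other database, so it very much matters for a server-level restore. Skipping only the log backup while keeping a scheduled full backup is the safer shape; in plain T-SQL that full backup is roughly the following (the path is a placeholder):

    -- Full backup of master; log backups are not valid for it
    BACKUP DATABASE [master]
    TO DISK = N'D:\SQLBackups\master_full.bak'
    WITH INIT, CHECKSUM, STATS = 10;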

    Read the article

  • Deleting windows.edb and unchecking the Indexing Service led to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on the hard drive by deleting the windows.edb indexing file... I also stopped the Windows Search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two usb connected hard drives (J:\ and I:\ ). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky to me at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw. However, as soon as I had done this I could not access the usb connected hard drives anymore. The error I got was "I:\photos is not accessible" "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\ Here is where I enter the twilight zone... I try disconnecting the I:\ usb hard drive. But XP shows me that instead the J:\ drive has disconnected and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at root level. I tried connecting the drives vice versa and the same thing happens. I try taking one of the hard drives to another computer, and when I connect it there it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters. Computer Manager - Disk Management shows the same result as explorer: the drive size is correct (one is 500GB, the other is 640GB) but the drive name is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but they incorrectly show the free space status of the other drive. Somehow the usb drives seem to have switched file tables, file records, boot records or something, extremely weird! Even weirder, if I try and create a text file or folder on this drive, it works fine, accessing them, saving, whatever, all good, but accessing any other data on the drive gives me an error. Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos ??? cheers, linni
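    For completeness, the procedure described above boils down to stopping both indexing-related services before removing the catalog file. The commands below are a sketch only - the service names and the windows.edb path assume Windows Search 4 installed on XP, so treat them as assumptions rather than verified values:

    rem Stop the Windows Search service and the classic XP Indexing Service
    net stop "Windows Search"
    net stop cisvc
    sc config wsearch start= disabled
    del "C:\Documents and Settings\All Users\Application Data\Microsoft\Search\Data\Applications\Windows\windows.edb"

    rem Read-only check of the swapped USB volumes (no /f yet, so nothing is modified)
    chkdsk I:
    chkdsk J:

    Given that the symptom looks like cross-wired volume metadata rather than anything the index file itself could cause, the read-only chkdsk at the end is a low-risk first diagnostic before attempting any repair.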

    Read the article

  • Puppet Directory and File ownership ignored

    - by Phil Sturgeon
    Puppet seems to be lying to me, which is not very nice. I am trying to set some files and directories included in /vagrant/src to be 666 and 777, and set the ownership group to the correct Apache user (using the PuppetLabs Apache module). Output from Puppet says yes. [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with /tmp/vagrant-puppet/manifests/default.pp... stdin: is not a tty No LSB modules are available. warning: require is a metaparam; this value will inherit to all contained resources warning: notify is a metaparam; this value will inherit to all contained resources notice: /Stage[main]//File[/vagrant/src/addons/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//Package[curl]/ensure: ensure changed 'purged' to 'present' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/uploads/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/mode: mode changed '0755' to '0777' notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/mode: mode changed '0755' to '0777' notice: Finished catalog run in 2.29 seconds Output from ls -lah says no: $ ls -lah /vagrant/src/ total 36K drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 00:11 . drwxr-xr-x 1 vagrant vagrant 340 2012-07-03 08:08 .. drwxr-xr-x 1 vagrant vagrant 136 2012-07-03 00:11 addons drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 assets drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 07:45 .git -rw-r--r-- 1 vagrant vagrant 1.3K 2012-07-03 00:11 .gitignore -rwxr-xr-x 1 vagrant vagrant 1.4K 2012-07-03 00:11 .htaccess -rwxr-xr-x 1 vagrant vagrant 8.8K 2012-07-03 00:11 index.php drwxr-xr-x 1 vagrant vagrant 442 2012-07-03 00:11 installer -rwxr-xr-x 1 vagrant vagrant 2.8K 2012-07-03 00:11 LICENSE -rw-r--r-- 1 vagrant vagrant 1.1K 2012-07-03 00:11 phpdoc.dist.xml -rw-r--r-- 1 vagrant vagrant 3.3K 2012-07-03 00:11 README.md drwxr-xr-x 1 vagrant vagrant 204 2012-07-03 00:11 system -rw-r--r-- 1 vagrant vagrant 42 2012-07-03 00:11 .travis.yml drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 uploads Whats up with that? My entire config can be found here.
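    One hedged reading of the mismatch: /vagrant is a VirtualBox shared folder (vboxsf), and that filesystem accepts chown/chmod calls without returning an error but does not actually persist them - so Puppet's providers see a successful change and report it, while ls keeps showing vagrant:vagrant and 0755. If that is what is happening here, ownership and permissions have to be set on the mount itself rather than on individual files; in a Vagrantfile that looks roughly like the snippet below (a sketch, and the exact syntax depends on the Vagrant version - the 1.0 series used config.vm.share_folder instead):

    # Vagrantfile (paths, users and modes are assumptions for illustration)
    config.vm.synced_folder "src/", "/vagrant/src",
      owner: "www-data",
      group: "www-data",
      mount_options: ["dmode=777", "fmode=666"]

    An alternative would be to keep the web-writable directories (cache, uploads, config) outside the shared folder entirely and let Puppet manage those real paths, where owner/group/mode changes do stick.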

    Read the article

  • Where to get laptop replacement parts?

    - by wai
    I am sure some of us have older laptops that still work, but with one or two parts that have started not working very well. However, sending one back to the manufacturer for repair might not be worth the money (some of them even charge consultation fees just to have it checked), as these old laptops are most probably well beyond warranty. Replacing those one or two parts could give your laptop a new lease of life. For example, I have a Dell Inspiron 640m whose fan recently failed. Everything else still works. I know enough to open up the laptop (after reading the service manual) and put it back together again, hence a replacement fan could save it. However, finding these parts turned out to be difficult. I guess I won't be able to get it from Dell: after searching around the Dell forums, I concluded that if I wanted Dell to even talk to me, I would have to pay some money (since my warranty has expired). I found this online: http://www.parts-people.com/index.php?action=item&id=3725 However, as I don't live in the United States, shipping alone would cost me 3 times more than the part in question. In addition, if there's a problem, it would be difficult because I probably would have to mail the parts back (costing more money). One of my friends spilled water on her MacBook; she sent it back to Apple and they told her the cost of repairing it exceeded that of buying a new MacBook. They would probably have had to change everything except the LCD screen. I opened it up and found that most of the parts were still working; only the RAM had fried. Replaced it and it's working again. For some time before discovering that only the RAM had fried, I was toying with the idea of replacing the logic board myself, but it looks like I didn't have to. =) Now I know there's a site for used MacBook replacement parts: http://www.ifixit.com/ The question is, does anyone know where to get replacement parts in their own locality? It might benefit the community as a whole. Someone might know a local computer store that sells precisely the things we need (for now, I hope to find a shop that sells just the fan). If you own a laptop for which you had an easy time finding replacement parts, do post that as well. PS: I wish laptop manufacturers had outlets where they sell these replacement parts (or at least let you place an order).

    Read the article

< Previous Page | 614 615 616 617 618 619 620 621 622 623 624 625  | Next Page >