Search Results


  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:

    1. Host the .crx on a server and click "Specify a Custom App" in the management console.
    2. Create a Domain App by uploading a zip to the Chrome Web Store.
    3. Upload the extension from my developer account to the Chrome Web Store and publish it to a single "trusted tester," or make it unlisted.

    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the .crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them!

    Option (2), having the school create a Domain App, seems a bit inefficient because it requires every school to upload its own zip. So essentially I would have to email a zip file to the school and have them publish it. Every update to the extension would also require a similar process, so this doesn't seem ideal.

    I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think the other people in the domain would be able to access it, and if it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity.

    Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
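
    For anyone debugging option (1), a quick sanity check is to look at what the server actually returns for the hosted .crx. This is only a sketch - the URL below is a placeholder for wherever the file is really hosted, and it assumes the file is publicly reachable from the managed Chromebooks:

        # Hypothetical URL - replace with the real location of the hosted .crx
        curl -sI https://extensions.example-school.org/myextension.crx

        # Things worth checking in the response headers:
        #   HTTP/1.1 200 OK                              (no redirect to a login page)
        #   Content-Type: application/x-chrome-extension (the MIME type Chrome expects)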

  • My Ruby on Rails application only works if the address contains the port

    - by True Soft
    I have a Ruby on Rails application that works fine on my notebook ( http://localhost:3000/ ). I uploaded it to my hosting server, created an application with cPanel X (the URL is http://example.com:12007/), created a rewrite from http://example.com/ to http://example.com:12007/, and started it.

    If I type http://example.com:12007/ or http://www.example.com:12007/ into my browser, all the pages work as expected. But if I type http://example.com/ or http://www.example.com/, the first page is displayed, but without any CSS or images (just as if it couldn't find them). I can see all the text (even the text from my MySQL database), but with no formatting. And if I click on any link, I get an error page like this:

        Not Found
        The requested URL /some_controller was not found on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an
        ErrorDocument to handle the request.

    What should I do to make my website work without writing the port in the address bar? The content of my /public_html/.htaccess file is

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^example.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.example.com$
        RewriteRule ^/?$ "http\:\/\/127\.0\.0\.1\:12007%{REQUEST_URI}" [P,QSA,L]

    which I guess was generated by cPanel's Rewrites feature.
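
    Worth noting for anyone comparing notes: the rule above only proxies the bare root URL (^/?$), which lines up with the symptom that only the first page renders. Below is an untested sketch of a variant that proxies every request instead; it assumes mod_proxy is enabled and allowed for .htaccess use on this host, and that the app really is listening on 127.0.0.1:12007:

        # Sketch only - proxy every request, not just "/", to the Rails app on port 12007
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule ^(.*)$ http://127.0.0.1:12007/$1 [P,QSA,L]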

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking.

    Background: I am currently wrapping up a development contract and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual.

    BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC based, which intercepts and rewrites URLs. I assume that this takes care of itself if I can just successfully get my app into a subdomain to begin with.

    What I have tried:

    - Creating a new site on the server in its own app pool
    - Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
    - Setting the binding for that site to MYAPP
    - Setting the binding for that site to MYAPP.SERVERNAME
    - Setting the binding for that site to MYAPP.SERVERNAME.COM
    - Setting the binding for that site to MYAPP.COMPANYNAME.COM

    Nothing is working. Am I missing something simple here? Thanks, Matt

  • Internet Sharing on Lion breaks my routing table

    - by seaders
    When in the office, I'm connected to a 192.168.1.0/24 network. When Internet Sharing is off and I run netstat -nr, the first entry shows

        default        192.168.1.254      UGSc       10       62     en0

    If I turn Internet Sharing on, it shows

        default        link#5             UCS         2        0     en1

    This is obviously incorrect and breaks all connectivity on my machine: en1 is my wireless, whereas en0 is my ethernet. If I then disable Internet Sharing, it even deletes that incorrect route, so I'm left with no default route at all. Currently I have one script that I run when I share, or afterwards when I disable sharing, that does

        route delete default
        route add default 192.168.1.254

    That fixes everything, but I'd love to know what's actually making this happen and how to properly fix it.

    And just to say that at some point a few months ago, this was working absolutely perfectly, with no hitches. Then one day when I brought the laptop home, I couldn't disable Internet Sharing, so I couldn't connect to my home wifi. I eventually had to restart the machine, and since then this problem has been happening. Any help would be greatly appreciated, thank you.

  • Errors generating gpg keypair

    - by Evan Lynch
    I posted this originally on Stack Overflow, but was told that it was off-topic and this would be the better place to post it, so I am reposting it here and deleting my original topic.

    I have a rather old PGP key, but I long ago lost the private key for it, so I'm trying to generate a new key with GPG on Windows 7. While it technically generates the key, GPA crashes every time I generate the keypair. I've tried this four times now and just downloaded what appears to be the latest version of Gpg4win, and I am still seeing the problem. A comment on my original post informed me that "GPA crashes" is not a very good description of the problem, but unfortunately I can't do much better than that: all it tells me is "gpa.exe has crashed and will close now"; I don't get an error dump or anything. Is there anything I can do to fix this, or is this just a bug in the latest version of Gpg4win?

    Here are the specs of the GPG setup I'm using:

    - GPA 0.9.4
    - GnuPG 2.0.22
    - Windows 7 64-bit with 5 GB of RAM

    Also, I was told to try generating the keypair on the command line, but I can't find any documentation for how to do this in Windows 7. If anyone could link me to current documentation for this, that would be a good workaround for solving this problem.
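
    For the command-line route: Gpg4win ships gpg.exe, and key generation with GnuPG 2.0.x is an interactive prompt-driven process, so a minimal session from a Windows command prompt looks roughly like this (no GPA involved):

        gpg --gen-key
            (interactive: choose key type, key size, expiry, and your name/email)
        gpg --list-keys
            (confirm the new public key exists)
        gpg --list-secret-keys
            (confirm the private part was created as well)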

  • Windows 7 hangs after going into sleep a second time

    - by Brian Stephenson
    I've searched everywhere around Google and can't figure out why this is happening, so I decided to ask here to see if anyone has a problem like this. Like it says in the title, whenever I sleep ONCE I'm able to wake the system, but going back to sleep again AFTER waking up for the first time results in it hanging with no input and no output, with the fan spinning as fast as possible and a lot of heat being spewed out by the fan as well.

    I've tried various things like setting all USB Root Hubs to not be switched off for power saving, disabling USB selective suspend, disabling PCI-E link state power management, and even unplugging ALL USB devices, and it won't wake up after the second attempt. I've even waited up to a full hour with the CPU fan spinning loudly and it's still stuck trying to wake up. The only USB devices I use are a Microsoft USB Comfort Curve Keyboard 2000 (IntelliType Pro) and a generic HID-compliant mouse from Creative, model number OMC90S "CREATIVE MOUSE OPTICAL LITE". My other devices like external drives and controllers are unplugged when I'm not using them, as having too many USB devices plugged in at a time causes a deadlock on almost all of the ports I have.

    Here are my system specifications (most of these are from CPU-Z):

    - Brand: Gateway DX4300-19
    - Mainboard: Gateway RS780
    - Chipset: AMD 780G Rev 00
    - Southbridge: AMD SB700 Rev 00
    - LPCIO: ITE IT8718
    - BIOS: American Megatrends Inc. ver P01-A4 09/15/2009
    - CPU: AMD Phenom II X4 810 at 2.60 GHz
    - RAM: 8.0 GB DDR2 Dual Channel Ganged Mode at 400 MHz
    - GPU: ATI Radeon HD3200 Graphics Integrated - RS780
    - OS: Windows 7 Home Premium x64 OEM (Acer Group)
    - HDD: WDC WD10EADS-22M2B0 1.0 TB (Western Digital Caviar Green)

    My BIOS gives me absolutely no control over whether sleep mode is S1 or S3, so I can't check those settings or change them. Hybrid sleep is also disabled. I can successfully go into hibernation and wake from hibernation, but this is painfully slow due to a hard drive problem I'm having with this "Green" drive (hibernation takes over ~3 minutes to complete). Any help would be appreciated, thanks.
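
    Even without BIOS options, the built-in powercfg tool will at least show which sleep states the machine exposes and what is arming or triggering wakes. These are stock Windows 7 commands, run from an elevated command prompt:

        powercfg /a
            (lists which sleep states - S1, S3, hybrid sleep, hibernate - are available)
        powercfg -devicequery wake_armed
            (lists devices currently allowed to wake the machine)
        powercfg -lastwake
            (shows what triggered the most recent wake)
        powercfg -energy
            (runs a 60-second trace and writes an energy/sleep diagnostics report)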

  • Need a recommendation for shared storage on auto-scaling ec2 w/ scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our two sites to a load-balanced EC2 system with Scalr as our cloud manager. Now the question is coming up about persistent storage for the users' uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals:

    - 2 websites (1 forum, 1 ecommerce)
    - 1 load balancer
    - 1 app server (to scale out to as many as needed)
    - 1 DB server (to scale out to as many as needed)

    Our sites will need to autoscale, and according to what I am learning about Scalr, that means as new instances load up I need to run a script to set up the basics on that server (git, PHP mods, pull the site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, etc. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mount just the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John
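
    For reference, the s3fs option mentioned above boils down to something like the sketch below on each app server as it boots. The bucket name and mount point are placeholders, and it assumes s3fs-fuse is installed with credentials already configured; an EBS volume, by contrast, can only be attached to one instance at a time, which is why shared uploads in an autoscaling setup usually end up on S3/s3fs or NFS.

        # Hypothetical bucket and path - adjust to the real site layout
        mkdir -p /var/www/websitefolder/images/avatars
        s3fs my-avatar-bucket /var/www/websitefolder/images/avatars -o allow_other -o use_cache=/tmp/s3fs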

  • Network speed between a VM and another machine which is not residing on the same host, is 11MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine which is not residing on the same host is 11 MB/s at most.

    Facts:

    - The ESXi 5 version is 5.0.0.504890
    - The VM has the latest VMware Tools installed
    - The VM is using the E1000 network driver
    - The physical box runs Windows Server 2008 R2
    - CrystalDiskMark says the drive on the physical box can read/write 100 MB/s
    - vCenter is another VM on the ESX host
    - Both the VM and the physical box show a 1 Gbps link speed
    - Configuration > Networking shows vmnic0 as 1000 Full
    - NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far:

    Test 1: The VM runs FileZilla FTP Server (default settings, one user account made) and the physical box runs FileZilla FTP Client (default settings). The physical box uploads a big file to the FTP server; transfer speed (as observed by Windows Task Manager on both machines) is ~11 MB/s (bad). The physical box then downloads that file from the FTP server; transfer speed is still ~11 MB/s (bad). Could it be a disk performance issue?

    Test 2: The physical box runs ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS and the VM runs ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS. Transfer speed (as observed by Windows Task Manager on both machines) is ~11 MB/s (bad). Could it be a switch performance issue?

    Test 3: The physical box runs the vSphere Client. I open Summary > Storage > datastore > Browse Datastore... from the physical box and upload a file to the datastore. Transfer speed (as observed by Windows Task Manager on the physical box) is ~26-36 MB/s (good). Could it be a VM-specific issue?

    Test 4: I installed NTttcp on another VM on the same ESX server and measured network performance between VMs on the same ESX server. Transfer speed (as observed by Windows Task Manager) is ~90-120 MB/s (excellent :).

    Test 5: I have another ESX server on the same site, connecting to the same datastore and the same switch. Those two ESX servers both have 2 NICs; one NIC goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the testing VMs off to the other ESX host and measured network performance between VMs on different ESX servers with NTttcp. Transfer speed is ~11 MB/s (bad).

    While I'm aware of these threads, they did not help (or I must have missed something):

    - ESXi 4.1 slow file transfer
    - ESXi 5 network performance is slow
    - Debian Etch and ESXi slow network speeds
    - VMWare ESXi slow file copy to guest

  • Are flash drives and hard drives thought of as "an ocean of bytes"?

    - by Jian Lin
    Why can a USB flash drive be formatted as NTFS or FAT32? Are a USB flash drive and a hard drive just to be thought of as "an ocean of bytes"? I am very used to hearing about formatting a hard drive as FAT32 or NTFS, but we can also format a USB flash drive as NTFS or FAT32? Is it because a hard drive and a flash drive can both be thought of as "an ocean of bits" or "an ocean of bytes"?

    I remember RAM as: it takes a 16-bit or 32-bit address signal (the 16 or 32 copper pins on the circuit board), and gives out 8 bits of data (the other 8 copper pins on the circuit board). So can a hard drive be thought of as working that way too? And is that why a flash drive can be the same - just an "ocean of bytes"?

    But is it true that a hard drive's hardware makes it an ocean of sectors or something else - that is, the smallest unit of read/write is not a byte but something else? So with this "ocean of bytes", NTFS has a format that says, "if the first byte is __, then it means __ (it is a file or folder, and it links to which sector, indicated by bytes 2 and 3, etc., etc.)"
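
    On Linux you can poke at this "ocean of bytes" view directly: the whole device shows up as one flat, byte-addressable file, while the hardware still reads and writes in fixed-size sectors underneath. A small sketch (the device name is only an example - double-check it before running anything against a real disk):

        # Logical sector size the kernel reports for the device
        sudo blockdev --getss /dev/sdb

        # Dump the first 512 bytes (one classic sector) as hex - on a FAT32 or NTFS
        # stick this is the boot sector the filesystem wrote during formatting
        sudo dd if=/dev/sdb bs=512 count=1 2>/dev/null | xxd | head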

  • Linux server: Dropped packets

    - by Lars
    I see dropped packets using ifconfig on my eth0 interface:

        eth0      Link encap:Ethernet  HWaddr 00:15:17:0d:03:ca
                  inet addr:10.0.1.2  Bcast:10.0.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
                  RX packets:30268348 errors:0 dropped:70721 overruns:0 frame:0
                  TX packets:133076885 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:8699434077 (8.6 GB)  TX bytes:194937313025 (194.9 GB)
                  Interrupt:16 Memory:feae0000-feb00000

    When I use ethtool -S I don't see anything wrong:

        NIC statistics:
            rx_packets: 30267138
            tx_packets: 133074510
            rx_bytes: 8699356158
            tx_bytes: 194934147340
            rx_broadcast: 35296
            tx_broadcast: 5435
            rx_multicast: 0
            tx_multicast: 0
            rx_errors: 0
            tx_errors: 0
            tx_dropped: 0
            multicast: 0
            collisions: 0
            rx_length_errors: 0
            rx_over_errors: 0
            rx_crc_errors: 0
            rx_frame_errors: 0
            rx_no_buffer_count: 0
            rx_missed_errors: 0
            tx_aborted_errors: 0
            tx_carrier_errors: 0
            tx_fifo_errors: 0
            tx_heartbeat_errors: 0
            tx_window_errors: 0
            tx_abort_late_coll: 0
            tx_deferred_ok: 0
            tx_single_coll_ok: 0
            tx_multi_coll_ok: 0
            tx_timeout_count: 0
            tx_restart_queue: 0
            rx_long_length_errors: 0
            rx_short_length_errors: 0
            rx_align_errors: 0
            tx_tcp_seg_good: 5757001
            tx_tcp_seg_failed: 0
            rx_flow_control_xon: 8649
            rx_flow_control_xoff: 62072
            tx_flow_control_xon: 0
            tx_flow_control_xoff: 0
            rx_long_byte_count: 8699356158
            rx_csum_offload_good: 30212111
            rx_csum_offload_errors: 0
            rx_header_split: 10857552
            alloc_rx_buff_failed: 0
            tx_smbus: 0
            rx_smbus: 0
            dropped_smbus: 0
            rx_dma_failed: 0
            tx_dma_failed: 0

    I am running Ubuntu 12.04 with kernel 3.2.0-30-generic #48-Ubuntu SMP. I have pinged every device on my internal network for about 24 hours without packet loss. I have also checked my router and my interface to the WAN, and there are no errors there either. Does anyone have any clue?
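
    A couple of stock diagnostics help narrow down where an RX "dropped" counter like this comes from; ring-buffer exhaustion and frames carrying VLAN tags the interface isn't configured for are two frequently cited causes on 3.x kernels. All of the following exist on Ubuntu 12.04; the ring value is only an example and must stay within the "Pre-set maximums" ethtool reports:

        # Current and maximum ring buffer sizes
        sudo ethtool -g eth0
        # Example: raise the RX ring if it is well below the hardware maximum
        sudo ethtool -G eth0 rx 4096

        # Kernel-side (softnet) drop counters, one line per CPU
        cat /proc/net/softnet_stat

        # Protocol-level counters, to see whether the drops correlate with anything visible
        netstat -s | head -40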

  • Why do I have to log into hotmail twice?

    - by Tony Lee
    I just recently noticed I have to attempt a login to Hotmail twice before it succeeds. Although I'm using Google Chrome (3.0.195.21), the symptoms are well described in a Mozilla thread. In short, I'm told:

        The e-mail address or password is incorrect. Please try again.

    The thread on Mozilla's site that's supposed to describe the latest details (and the first hit on Google when I search for "hotmail login twice") requires an account to read, so I'm hoping someone here has a good synopsis of what the cause is.

    I normally start at hotmail.com, which redirects to login.live.com/.... I can log in by starting at mail.live.com, by using IE8, or by attempting a second login. Oddly, if I start at login.live.com, Chrome tells me there is a redirect loop. Does anyone know, or have a public link to, the root cause of the double login? (It is a Hotmail account I'm logging into.)

    EDIT - Caused by my restricting how third parties can use cookies. If I allow all cookies, it works the first time.

  • Is there a limit setting a php_admin_value in php-fpm?

    - by PeeHaa
    I am trying to set a large value in the configuration of a pool in php-fpm, but at some point it just doesn't start anymore.

        php_admin_value[disable_functions] = dl,exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,pcntl_exec,include,include_once,require,require_once,posix_mkfifo,posix_getlogin,posix_ttyname,getenv,get_current_use,proc_get_status,get_cfg_va,disk_free_space,disk_total_space,diskfreespace,getcwd,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_set,mail,proc_nice,proc_terminate,proc_close,pfsockopen,fsockopen,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,fopen,tmpfile,bzopen,gzopen,chgrp,chmod,chown,copy,file_put_contents,lchgrp,lchown,link,mkdi,move_uploaded_file,rename,rmdi,symlink,tempnam,touch,unlink,iptcembed,ftp_get,ftp_nb_get,file_exists,file_get_contents,file,fileatime,filectime,filegroup,fileinode,filemtime,fileowne,fileperms,filesize,filetype,glob,is_di,is_executable,is_file,is_link,is_readable,is_uploaded_file,is_writable,is_writeable,linkinfo,lstat,parse_ini_file,pathinfo,readfile,readlink,realpath,stat,gzfile,create_function

    When trying to restart php-fpm, it fails with the following message:

        Stopping php-fpm: [ OK ]
        Starting php-fpm: [20-Oct-2013 22:31:52] ERROR: [/etc/php-fpm.d/codepad.conf:235] value is NULL for a ZEND_INI_PARSER_ENTRY
        [20-Oct-2013 22:31:52] ERROR: Unable to include /etc/php-fpm.d/codepad.conf from /etc/php-fpm.conf at line 235
        [20-Oct-2013 22:31:52] ERROR: failed to load configuration file '/etc/php-fpm.conf'
        [20-Oct-2013 22:31:52] ERROR: FPM initialization failed
        [FAILED]

    When I remove the last disabled function (create_function), it starts again. I also tried it with other functions, but that gives the same error, so it's not related to the create_function function itself. The string is currently just over 1 KB in size, so it looks like I have hit a limit here? Is my assumption correct? Is there a way to overcome this limit?

    I also tried to add another php_admin_value[disable_functions] line underneath it (hoping it would be appended), but that didn't work (it just used the first one).

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's in WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This is so I didn't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots so they simply grab the new stories off the front page and stop there?

    EDIT - Details of my server configuration: the way we have this set up is that the server handling all the publishing is an unmanaged instance via AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you into there. The front-end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is Nginx and the publishing server is Apache.
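
    One lever that doesn't require touching the application is robots.txt. The paths below are placeholders for whatever archive/feed URLs make up the bulk of those 30,000 links, and note that Googlebot ignores Crawl-delay (its crawl rate is set in Webmaster Tools instead), so that line only affects other crawlers:

        # Hypothetical robots.txt sketch - adjust the paths to the site's real archive/feed URLs
        User-agent: *
        Crawl-delay: 10
        Disallow: /feed/
        Disallow: /*/feed/
        Disallow: /page/

        Sitemap: http://example.com/sitemap.xml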

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. A lot of this previous work can often be reused for a future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following:

    - Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya
    - Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above
    - Preferably cloud based, so that it can be accessed by anybody; additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone)
    - A nice interface that non-technically-savvy folks can warm up to ;)
    - A desktop app would be handy so that people don't always have to click upload or something

    A tree-based system is inefficient in this case because content is usually linked across branches, and also people might not quite agree on one format of a tree. Tagging works around this very nicely.

    What I have considered so far:

    - Evernote (for its more professional look)
    - Springpad (for its versatility with content)
    - Mendeley (this is a research manager and in some ways ideal, but I fear it's limited to PDFs)

    The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

  • Connect to Nonencrypted Wireless Network Using Ubuntu Commands

    - by Tim
    I failed to connect to an open (i.e. nonencrypted) wireless network using Ubuntu command lines. Here is what I did:

        $ sudo /etc/init.d/NetworkManager stop
         * Stopping network connection manager NetworkManager          [ OK ]
        $ sudo /sbin/ifconfig wlan0 up
        $ sudo iwconfig wlan0 essid "Cavalier High-Speed 866-4-CAVTEL"
        $ sudo dhclient wlan0
        There is already a pid file /var/run/dhclient.pid with pid 10812
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        wmaster0: unknown hardware address type 801
        wmaster0: unknown hardware address type 801
        Listening on LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   Socket/fallback
        DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
        DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 8
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 21
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 6
        No DHCPOFFERS received.
        Trying recorded lease 192.168.1.67
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
        Trying recorded lease 192.168.1.45
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
        No working leases in persistent database - sleeping.
        $ sudo /sbin/iwconfig wlan0
        wlan0     IEEE 802.11bg  Mode:Managed  Frequency:2.422 GHz  Access Point: Not-Associated
                  Tx-Power=27 dBm
                  Retry min limit:7   RTS thr:off   Fragment thr=2352 B
                  Encryption key:off
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    I was wondering what the problem is and how I can do it right? Thanks and regards!
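
    For what it's worth, the iwconfig output at the end shows "Access Point: Not-Associated", i.e. the card never actually joined the network before dhclient ran. A couple of stock wireless-tools commands that help confirm the association step (interface name as in the question):

        # Confirm the AP is actually visible and note its ESSID/channel/signal
        sudo iwlist wlan0 scan | grep -E "ESSID|Channel|Quality"

        # Re-try the association, then check that "Access Point:" shows a MAC address
        sudo iwconfig wlan0 essid "Cavalier High-Speed 866-4-CAVTEL"
        sleep 5
        iwconfig wlan0 | grep "Access Point"

        # Only once the card is associated is a DHCP request worth attempting
        sudo dhclient wlan0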

  • Making "default saved" work with GRUB2...?

    - by baltusaj
    I just installed the Moblin operating system. It's using GRUB 2. On my Ubuntu 8.04, GRUB 0.97 was being used, and I was using the "default saved" option comfortably. I found that with GRUB 2 I should not edit /boot/grub/menu.lst directly, but I did :) because my Moblin does not contain any /etc/default/grub, which is where they say I should make the modification I want. So what I did is the following, which did not work:

        default=saved
        timeout=1
        #splashimage=(hd0,0)/boot/grub/splash.xpm.gz
        #hiddenmenu
        #silent

        title Moblin (2.6.31.5-10.1.moblin2-netbook)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.31.5-10.1.moblin2-netbook ro root=/dev/sda1 vga=current
            savedefault=1

        title Pathetic Windows
            rootnoverify (hd0,1)
            chainloader +1
            savedefault=0

    By doing so I should have automatically switched between Moblin and Windows at each boot, but it's not working. Almost all the troubleshooters on the internet are saying that I should enable the DEFAULT=saved option in /etc/default/grub, but I am unable to find this file. Any idea what else I should do? Thanks a lot.

    Update: I used the equals sign because by default my menu.lst had an entry default=0. However, "default 0" is also working fine. Moreover, the menu.lst I have is actually a symbolic link to ./grub.conf. I have also noticed that the grub-install and grub-set-default commands are not working.
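
    For comparison, the pattern GRUB legacy (0.97) used on Ubuntu 8.04 for alternating between entries was "default saved" plus a bare "savedefault N" in each stanza pointing at the other entry; whether Moblin's grub.conf parser accepts exactly this syntax is an assumption, so treat the following purely as a sketch:

        default saved
        timeout 1

        title Moblin (2.6.31.5-10.1.moblin2-netbook)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.31.5-10.1.moblin2-netbook ro root=/dev/sda1 vga=current
            savedefault 1

        title Pathetic Windows
            rootnoverify (hd0,1)
            chainloader +1
            savedefault 0

    The GRUB 2 equivalent (only applicable if /etc/default/grub and update-grub actually exist on the system) is GRUB_DEFAULT=saved plus GRUB_SAVEDEFAULT=true followed by update-grub, or picking an entry once with grub-set-default.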

  • Broken Python installation on CentOS 5.8

    - by Beckett
    I already searched for a solution to my problem via Google and Stack Overflow's search facility, but haven't found anything related specifically to it. Here's the problem: I needed Python 2.7.3 on a CentOS 5.8 machine which has only Python 2.4.3 preinstalled. There is no suitable version in its repositories, and I can't upgrade the installed version either, which is why I decided to build Python from source code. But I made a mistake: instead of make altinstall I did make install, thus changing the default Python of the current installation. That was before I found this article - How to install Python 2.7.3 on CentOS 6.2. I guess 5.8 and 6.2 aren't different to the extent that this article is inapplicable.

    After installing the new Python version I installed pip, but once I tried to invoke it, I got a "No module named pkg_resources" error. In order to solve this issue I installed setuptools from the repository, but that only led to another error: "Distribution Not Found". My final step was to follow the guide I posted the link to, but I was unable to perform the last step: the easy_install-2.7 virtualenv command threw "-bash: /usr/local/bin/easy_install-2.7: .: bad interpreter: Permission denied". Now when I try to invoke pip or pip-2.7, both commands raise the same error with different binary names after "-bash:".

    Is there any way to fix this problem, so I could install the new Python version (2.7.3) alongside the preinstalled one (2.4.3) according to the guide? Any help will be appreciated.

    P.S.: yum is working fine, although it needs Python to function, so I hope the damage I unknowingly caused isn't very severe. Also, I'm not a native English speaker, so I apologize for possible occasional grammatical and/or spelling errors.
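
    For reference, the altinstall flow such guides describe boils down to roughly the following; the version numbers match the question, the download URL was the python.org location for 2.7.3 at the time, and whether extra -devel packages are needed depends on which modules have to build:

        # Build 2.7.3 from source without touching the python that yum on CentOS 5 depends on
        cd /usr/local/src
        wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tar.bz2
        tar xjf Python-2.7.3.tar.bz2
        cd Python-2.7.3
        ./configure --prefix=/usr/local
        make
        sudo make altinstall          # installs /usr/local/bin/python2.7, leaves "python" alone

        # Sanity checks afterwards
        /usr/local/bin/python2.7 -V   # should report Python 2.7.3
        /usr/bin/python -V            # should still report the 2.4.3 that yum uses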

  • How to enable hotlink protection without hardcoding my domain in the Apache config file?

    - by Jeff
    I've been surfing around for a solution for a couple of days now. How do I enable Apache hotlink protection without hardcoding my domain in the config file, so I can port the code to my other domains without having to update the config file every time? This is what I have so far:

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www\.example\.com [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F,L]

    ... and this is what Apache suggests:

        SetEnvIf Referer example\.com localreferer
        <FilesMatch \.(jpg|png|gif)$>
            Order deny,allow
            Deny from all
            Allow from env=localreferer
        </FilesMatch>

    ... both of which hardcode the domain in their rules. The closest I came to finding any info that covers this is right here on Server Fault, but the conclusion was that it cannot be done. Based on my research, that appears to be true, but I didn't find any questions or commentary dedicated solely to this question. If anyone's curious, here is the link to the Apache 2 docs that cover this topic. Note that Apache variables (e.g. %{HTTP_REFERER}) can only be used in the RewriteCond test-string and the RewriteRule substitution arguments.

  • Windows Phone sync error when syncing with iTunes on different Hard Drive

    - by njallam
    I have my iTunes library file on a separate hard drive (which I believe may be the cause of the problem) and I have been trying to use it to synchronize with my Windows Phone. I would like to first note that if I set up my phone to synchronize with 'Windows Libraries', then it works fine. This is however not ideal, as I have categorised my music and made playlists etc. in iTunes.

    When I first link my Windows Phone to the Windows Phone app (for desktop) and select iTunes from the above selection, I get an error message. After searching for that error, I found the following forum threads:

    - Fix for error 8300300B when trying to sync Lumia 920 Windows 8 Phone in PC?
    - Error code 8300300B on Windows Phone 8 while trying to sync

    I've tried the workarounds described in the above threads; however, they did not work for me. If I ignore that error message, I see the expected interface, along with all of my iTunes library's media, but the 'Sync' button is greyed out.

    I have tried some other things to try and fix this:

    - Removing the app's AppData folder
    - Uninstalling and reinstalling
    - Using the full-screen modern app (does not allow for iTunes syncing)

  • SQLSTATE[HY000]: General error: 2006 MySQL server has gone away

    - by Barkat Ullah
    Server details:

    - RAM: 16 GB
    - HDD: 1000 GB
    - OS: Linux 2.6.32-220.7.1.el6.x86_64
    - Processor: 6 core

    Please see the link below for my # top preview:

    I can often see the error mentioned in the title in my Plesk panel, and my /etc/my.cnf configuration is as below:

        bind-address=127.0.0.1
        local-infile=0
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        max_connections=20000
        max_user_connections=20000
        key_buffer_size=512M
        join_buffer_size=4M
        read_buffer_size=4M
        read_rnd_buffer_size=512M
        sort_buffer_size=8M
        wait_timeout=300
        interactive_timeout=300
        connect_timeout=300
        tmp_table_size=8M
        thread_concurrency=12
        concurrent_insert=2
        query_cache_limit=64M
        query_cache_size=128M
        query_cache_type=2
        transaction_alloc_block_size=8192
        max_allowed_packet=512M

        [mysqldump]
        quick
        max_allowed_packet=512M

        [myisamchk]
        key_buffer_size=128M
        sort_buffer_size=128M
        read_buffer_size=32M
        write_buffer_size=32M

        [mysqlhotcopy]
        interactive-timeout

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        open_files_limit=8192

    My server's httpd tuning is set in /etc/httpd/conf.d/swtune.conf and the configuration is as below (prefork.c):

        <IfModule prefork.c>
            StartServers 8
            MinSpareServers 10
            MaxSpareServers 20
            ServerLimit 1536
            MaxClients 1536
            MaxRequestsPerChild 4000
        </IfModule>

    If I run grep -i maxclient /var/log/httpd/error_log then I can see this error every day:

        [root@u16170254 ~]# grep -i maxclient /var/log/httpd/error_log
        [Sun Apr 15 07:26:03 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting
        [Mon Apr 16 06:09:22 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting

    I have tried to explain everything I changed to keep my server okay, but most of the time my server is down. Please help me: which parameters can I change to keep my server okay and make my sites load fast? It is taking too much time to load my sites.
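
    A few read-only checks usually narrow down "server has gone away" (the common causes being the server restarting, e.g. after being killed for memory, a connection timing out, or a packet larger than max_allowed_packet). These are standard mysql/mysqladmin invocations, offered as a diagnostic sketch rather than a fix:

        # Has mysqld been restarting? A low uptime points at crashes/OOM kills rather than timeouts
        mysqladmin status
        grep -iE "out of memory|oom|signal" /var/log/mysqld.log /var/log/messages

        # Aborted connections and the settings the error usually hinges on
        mysql -e "SHOW GLOBAL STATUS LIKE 'Aborted_%'"
        mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('wait_timeout','max_allowed_packet')"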

  • Migrate openldap users and groups

    - by user53864
    I have an OpenLDAP server running on one of my Ubuntu 8.10 servers. I used the command line only for the OpenLDAP installation and some basic configuration; everything else I configure with the Webmin GUI tool. I'm trying to migrate to Ubuntu 10.04, and I was able to migrate all other services, applications and databases, but not LDAP. I'm an LDAP beginner: I installed the OpenLDAP server and client on Ubuntu 10.04 server using the link and used the following commands to export and import LDAP users and groups.

    To export from the 8.10 server:

        slapcat > ldap.ldif

    To import to the 10.04 server, stop ldap, run

        slapadd -l ldap.ldif

    and start ldap again. Then I accessed Webmin and checked "LDAP users and groups", and I could see all the users and groups of my old LDAP server. Whenever I create an LDAP user from Webmin (on 8.10 or 10.04), a Unix user is also created with its home directory under /home. But the users imported into 10.04 from 8.10 are not present as Unix users (/etc/passwd).

    How can I make the LDAP users available as Unix users? Is there a proper way to export and import? I also wanted to check from the terminal whether the passwords were exported properly, but I don't know how to access the LDAP users which are not available as Unix users. On 8.10 I just use su - ldapuser, and that is not working on 10.04 as Unix users are not created for the exported LDAP users. If everything works fine then CVS works, as it is using LDAP authentication. Can anybody help me?
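
    Two stock commands are handy for inspecting the imported accounts directly, without needing a matching Unix user; the base DN and username below are placeholders:

        # Query the directory itself (anonymous bind; add -D/-W if the DIT doesn't allow it)
        ldapsearch -x -b "dc=example,dc=com" "(uid=someuser)"

        # Ask NSS whether LDAP accounts are being mapped to Unix users at all -
        # if this prints nothing for an imported user, it's the libnss-ldap/pam
        # configuration on the 10.04 box that needs attention, not the directory data
        getent passwd someuser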

  • How do I access my samba drive through several layers of network topology?

    - by stephenmm
    I have a new Windows 7 Home Premium machine that is in a different room than my main computer area. As such, I have to use a bridge and another router. Everything is working wonderfully, except I cannot access the SAMBA drive with the new machine. I know that SAMBA is accessible, as an older WinXP machine can access it. A picture of my network will probably be helpful:

        ISP
         |
        Cable modem (2WIRE678)
         |
        Belkin wireless router (F5D)
         |-- (wireless) WinXP .............. SAMBA user
         |-- Ubuntu box .................... Apache + SAMBA server
         |-- Netgear bridge (XET1001)
               |
               | (bridge link)
               |
             Netgear bridge (XET1001)
               |
             D-Link router (DI-524)
               |
             Win7 .......................... SAMBA user?

    More interesting data points:

    1. I can ping the SAMBA server from the Win7 machine locally (i.e. 192.168.2.2).
    2. I can access the web server from the Win7 machine locally (i.e. 192.168.2.2).
    3. I followed the advice to get Win7 and SAMBA to play nice: http://www.tannerwilliamson.com/2009/09/windows-7-seven-network-file-sharing-fix-samba-smb/

    Sorry for being so long-winded, but it is kind of complex and I am really at a loss as to how to fix it. If any of you have some suggestions, I would love to hear them!
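
    Since ping and HTTP to 192.168.2.2 already work from the Win7 box, one quick way to separate name-resolution problems from SMB problems is to hit the share by IP address from a Win7 command prompt (the share and user names below are placeholders):

        net view \\192.168.2.2
            (lists the shares the SAMBA server offers; if this works, browsing by name is the issue)
        net use Z: \\192.168.2.2\sharename /user:sambauser
            (maps the share by IP; if this works too, it points at discovery/name resolution
             across the two routers rather than at SAMBA permissions)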

  • Running KVM/XEN/Hyper-V VMs from a RAM disk, is this possible? Practical?

    - by Ausmith1
    Currently I'm using ESX (v3 and v4) to test a scripted OS (Windows 2003) and application install DVD. The DVD ISO (8 GB) is mounted on a 1 Gbps NFS datastore and the VMDKs (20 GB) are on an SSD mounted via NFS over a 10 Gbps link. It still takes a lot longer than I'd really like to run through a test iteration, and I'm wondering if mounting the virtual disks and ISO on a RAM disk on the same server the hypervisor is running on would be worth my while. I can dedicate a server to this VM, and 32 GB of RAM in the system should be adequate to do the trick, I'd guess (1 GB hypervisor OS + 28 GB RAM disk + 2 GB for the VM is < the 32 GB available to me).

    Since hosting a RAM disk within ESX does not seem possible, I'm open to trying KVM/Xen/Hyper-V; KVM would probably be my first choice of the three. Has anyone out there tried this? Bear in mind this is purely for a test run of the installer; the VM will be discarded as soon as the test is completed, so I'm not worried about losing data from the remote possibility of a power failure.
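
    On a Linux/KVM host the RAM-disk part is just a tmpfs mount, so a rough sketch of the storage side might look like this (the paths and image names are placeholders; the 28G size is taken from the question):

        # Back the scratch VM's storage with RAM on the KVM host
        sudo mkdir -p /var/lib/libvirt/ramdisk
        sudo mount -t tmpfs -o size=28G tmpfs /var/lib/libvirt/ramdisk

        # Copy (or create) the installer ISO and the guest disk there
        cp /path/to/install.iso /var/lib/libvirt/ramdisk/
        qemu-img create -f qcow2 /var/lib/libvirt/ramdisk/win2003-test.qcow2 20G

        # Point the VM definition at those files; everything evaporates on unmount or reboot,
        # which is fine here since the VM is discarded after each test run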

  • Windows 7 Automatically Connecting To Unsecured Wireless Networks On Startup

    - by Xtend
    Most of the questions on this topic relate to folks connecting to somebody else's wireless network when their own was available, who could remedy the situation by going to their connections and unchecking the "connect automatically" box. See this: "Avoid automatically connecting to wireless network on windows 7" as an example.

    In my situation, I've noticed that Windows 7 will automatically connect to any unsecured wifi network - even one I have never connected to in the past. If I am traveling and boot Windows 7, it will start and connect to what appears to be the best-signalled unsecured network without prompting me for confirmation (note: in the above link, "Naveen" seems to have the same problem). Obviously, that is a security concern to me. Further, when I open "Network and Sharing" and "Manage wireless networks", the network is not displayed (probably because I labelled it a public network). Again, these are new wireless networks, never connected to before. I always promptly disconnect from them, but I don't want to have to be on constant guard against an automatic connection to a malicious network.

    This began about a month ago. As I recall, Windows 7 did not behave like this in the past, I didn't monkey with wifi settings, and I don't use a third-party connection manager. I did have to download some internet security certificates for army website access, but I don't think that should mess with network settings. Any ideas how I can tell Windows 7 to cease automatically connecting to networks or, at least, to prompt me for confirmation before connecting?
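
    One place that shows more than the "Manage wireless networks" panel does is netsh. These are built-in Windows 7 commands (run from an elevated command prompt), and the profile name is whatever the first command actually lists:

        netsh wlan show profiles
            (lists every saved wireless profile, including ones the GUI may not display)
        netsh wlan show profile name="SomeOpenNetwork"
            (shows whether that profile's connection mode is set to connect automatically)
        netsh wlan set profileparameter name="SomeOpenNetwork" connectionmode=manual
            (switches it to manual so Windows no longer auto-connects to it)
        netsh wlan delete profile name="SomeOpenNetwork"
            (or removes the profile entirely)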

  • Virus on site but can't find where

    - by Rob
    WARNING! THIS IS ABOUT A VIRUS ON MY SITE. IT APPEARS IT HAS BEEN THERE FOR SOME TIME AND I'VE HAD NO PROBLEMS, BUT PLEASE BE CAREFUL. READ EVERYTHING I SAY AND SEE IF YOU CAN HELP ME WITHOUT VISITING THE LINK. AVG PICKS UP ON IT AND BLOCKS IT; MCAFEE DOES NOT.

    Sorry about the warning; obviously I'm not here to get anyone infected or anything like that. Basically I run the website sortitoutsi dot net. Ages ago I got a virus on my computer; they got hold of my FTP passwords and added some lines of JavaScript to the top of my site. I removed them and believed it was fixed. However, I'm using the "Web Developer" extension for Firefox, and when I chose to view all JavaScript on my page I found there are various links to horrible URLs such as:

        gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/aboutus.org/aboutus.org/google.com/skycn.com/torrents.ru.php

    and

        gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/index.php?jl=

    These terms do not appear anywhere: not in the source code, not in any of the JavaScript, and not in the CSS. I also can't see any rogue images that I don't recognise. So I've no idea where this JavaScript is coming from. Can anyone suggest how I can find references to these links and remove them? I can see them both in the Web Developer Firefox extension and in the Net tab using Firebug. Any help would be greatly appreciated.
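
    If there is shell access to the server, a recursive grep across the web root (plus a look at any database-stored content) is the usual way to track injected script down. The paths, database name and table below are placeholders, and the second grep is only a heuristic that also matches some legitimate code:

        # Search every file the web server serves, including .htaccess files and templates
        grep -rn "eastmusicdirect" /path/to/webroot
        grep -rln "base64_decode\|eval(" /path/to/webroot --include="*.php"

        # If pages are assembled from a database, search there too (table/column names are guesses)
        mysql -u dbuser -p -e "SELECT id FROM pages WHERE body LIKE '%eastmusicdirect%'" sitedb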
