Search Results

Search found 12676 results on 508 pages for 'virtual directories'.

  • Reach self-hosted server from LAN

    - by Freefri
    I have a self-hosted server running Apache2, with the domain example.com pointing at it, plus some virtual hosts: www.example.com, cloud.example.com, etc. The server is on my LAN, and when I tried to access it from inside the LAN through www.example.com I got my router's configuration page instead. From outside the LAN, www.example.com and cloud.example.com work properly. From inside the LAN, 192.168.1.33 (the server's internal IP) shows the default web page (www.example.com), but I could not reach cloud.example.com. I also have a LAN name server on 192.168.1.33 running BIND9, and I set up my gateway (192.168.1.1) with this LAN NS as the primary NS. I solved the problem by creating a new DNS zone on the name server. These are my config files:

        ;ZONE-1
        $ORIGIN .
        $TTL 86400      ; 1 day
        home.lan. IN SOA server.home.lan. hostmaster.home.lan. (
                        2008080901 ; serial
                        8H         ; refresh
                        4H         ; retry
                        4W         ; expire
                        1D         ; minimum
                        )
        home.lan. IN NS server.home.lan.
        $ORIGIN home.lan.
        ; Set the address for localhost.home.lan
        localhost IN A 127.0.0.1
        router    IN A 192.168.1.1
        server    IN A 192.168.1.33
        mypc      IN A 192.168.1.132

        ;ZONE-2
        $ORIGIN .
        $TTL 86400      ; 1 day
        example.com. IN SOA www.example.com. hostmaster.home.lan. (
                        2008080902 ; serial
                        8H         ; refresh
                        4H         ; retry
                        4W         ; expire
                        1D         ; minimum
                        )
        example.com. IN NS 192.168.1.33
        $ORIGIN example.com.
        localhost IN A 127.0.0.1
        www       IN A 192.168.1.33
        cloud     IN A 192.168.1.33

    My DNS and my names are working properly now. My questions are: What do you think of this solution? Can I replace the A records with CNAMEs to server.home.lan (the server's name inside the LAN)? And how can I set a default IP for any whatever.example.com?
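
    On the last question, a wildcard record is the usual mechanism. As a sketch (assuming it is added to ZONE-2 under $ORIGIN example.com., with the serial bumped so the zone reloads), one extra line maps every otherwise-unmatched name to the server:

        ; catch-all for any other name under example.com.
        *         IN A 192.168.1.33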

  • Windows 8 disk errors

    - by wrongusername
    So yesterday I forcibly restarted my Windows 8 PC. VMware Workstation was having some trouble with a guest Linux Mint OS: it hadn't been responding for some time, since September 28th or perhaps even before, so I tried suspending it. It wouldn't suspend; I forget exactly what the window looked like, but all options in the power menu ("Shutdown," "Power Off," and the like) were disabled. I eventually killed the VMware application through Task Manager, though I was too lazy to hunt down the running virtual machine itself, and decided to kill it by just shutting down my PC entirely. The PC wouldn't shut down for quite some time after the monitor went blank, so I did a cold reset by holding the power button.

    I then powered it on again and Windows briefly gave me a message like "Search for KERNEL_STACK_INPAGE_ERROR." Windows then started diagnosing problems and reported, "Repairing disk errors. This might take over an hour to complete." That was yesterday night, and I went to sleep without waiting for it to finish.

    This morning, it said that the repair failed, and that the log was at C:\windows\system32\LogFiles\srt\srtTrail.txt (as I remember it; I don't have the exact path written down right now). It gave me some other troubleshooting options, such as resetting Windows (files and settings still intact, but programs not installed through the app store erased). That didn't work -- no error message given, I was just told it didn't work. When I try rebooting into safe mode, the same diagnosis process begins, except that this time it doesn't bother with the automatic repairs.

    So I tried using the command prompt to see if my files are at least still there. I was on the X: drive, and I couldn't cd to the C: drive. I couldn't find my folder under Users (of course?), and couldn't find the srt folder under LogFiles either. I am not sure what to try next. I have backed up everything, but to the cloud, so if absolutely necessary I can start off with a fresh copy of Windows and restore all my data, though it would be a hassle. Any thoughts on what might be wrong or what I can try? My computer was purchased just this June, so the hard drive should still be pretty new.

  • Triple monitor setup in Linux

    - by Brendan Abel
    I'm hoping there are some Xorg gurus out there. I'm trying to get a three-monitor setup working in Linux. I have two LCD monitors and a TV, all at different resolutions, driven by two video cards: a 9800 GTX and a 7900 GT. I've seen a lot of posts about people trying to make this work, and in every case they either gave up or Xinerama magically solved all their problems.

    Basically, my main problem is that I cannot get Xinerama to work. Every time I turn it on in the options, my machine gets stuck in a never-ending boot cycle. If I disable Xinerama, I just have three Xorg screens, but I can't drag windows from one to the other. I can get the two LCDs on TwinView and the TV on a separate Xorg screen, no problem, but I don't really like this solution. I'd rather have them all on separate screens and stitch them together with Xinerama. Has anyone done this? Here's my xorg.conf for reference.

    P.S. This took me all of 30 seconds to set up in Windows XP!
    P.P.S. I've seen somewhere that maybe RandR can solve my problems? But I'm not quite sure how?

        Section "Monitor"
            Identifier "Main1"
            VendorName "Acer"
            ModelName  "H233H"
            HorizSync  40-70
            VertRefresh 60
            Option "dpms"
        EndSection

        #Section "Monitor"
        #    Identifier "Main2"
        #    VendorName "Acer"
        #    ModelName  "AL2216W"
        #    HorizSync  40-70
        #    VertRefresh 60
        #    Option "dpms"
        #EndSection

        Section "Monitor"
            Identifier "Projector"
            VendorName "BenQ"
            ModelName  "W500"
            HorizSync  44.955-45
            VertRefresh 59.94-60
            Option "dpms"
        EndSection

        Section "Device"
            Identifier "Card1"
            Driver     "nvidia"
            VendorName "nvidia"
            BusID      "PCI:5:0:0"
            BoardName  "nVidia Corporation G92 [GeForce 9800 GTX+]"
            Option "ConnectedMonitor" "DFP,DFP"
            Option "NvAGP"  "0"
            Option "NoLogo" "True"
            #Option "TVStandard" "HD720p"
        EndSection

        Section "Device"
            Identifier "Card2"
            Driver     "nvidia"
            VendorName "nvidia"
            BusID      "PCI:4:0:0"
            BoardName  "nVidia Corporation G71 [GeForce 7900 GT/GTO]"
            Option "NvAGP"  "0"
            Option "NoLogo" "True"
            Option "TVStandard" "HD720p"
        EndSection

        Section "Module"
            Load "glx"
        EndSection

        Section "Screen"
            Identifier "ScreenMain-0"
            Device     "Card1-0"
            Monitor    "Main1"
            DefaultDepth 24
            Option "Twinview"
            Option "TwinViewOrientation" "RightOf"
            Option "MetaModes"   "DFP-0: 1920x1080; DFP-1: 1680x1050"
            Option "HorizSync"   "DFP-0: 40-70; DFP-1: 40-70"
            Option "VertRefresh" "DFP-0: 60; DFP-1: 60"
            #SubSection "Display"
            #    Depth 24
            #    Virtual 4880 1080
            #EndSubSection
        EndSection

        Section "Screen"
            Identifier "ScreenProjector"
            Device     "Card2"
            Monitor    "Projector"
            DefaultDepth 24
            Option "MetaModes"   "TV-0: 1280x720"
            Option "HorizSync"   "TV-0: 44.955-45"
            Option "VertRefresh" "TV-0: 59.94-60"
        EndSection

        Section "ServerLayout"
            Identifier "BothTwinView"
            Screen "ScreenMain-0"
            Screen "ScreenProjector" LeftOf "ScreenMain-0"
            #Option "Xinerama" "on" # most important option: lets windows expand to three monitors
        EndSection
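
    For what it's worth, this is roughly what a RandR-based layout looks like from the command line -- only a sketch, since the output names below are assumptions (they depend on the driver), and the proprietary nvidia driver of that era could not combine outputs from two separate cards into one RandR screen:

        xrandr --output DFP-0 --mode 1920x1080 --pos 0x0 \
               --output DFP-1 --mode 1680x1050 --right-of DFP-0 \
               --output TV-0  --mode 1280x720  --left-of DFP-0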

  • Recovering from backup without original install media

    - by KGendron
    A machine from my old job had a complete hard drive failure. I have backups, but I'm running into severe problems restoring from them. The only install media was a secondary restore partition on the system's hard drive. I hate whoever came up with that idea more than I can possibly express in words. I spent several days trying to recover the disk; it is pretty well shot, and none of my best tricks could even get it to show up in the BIOS.

    The machine that broke is an HP with XP Media Center Edition on it (I don't know why either). The backups were created using the default Windows backup tool; I have a .bkf file on an external hard drive that I am trying to restore from. I've replaced the hard drive. My home machine is running Windows 7 64-bit, and I'm trying to use it as a platform to restore to the other disk.

    I downloaded the Windows NT Backup - Restore Utility for Windows 7; however, no matter what I do, it restores to my C: drive rather than the specified drive. Fortunately, Windows 7 security settings prevented it from being a complete disaster -- but still not a happy thing. I tried firing up an XP virtual machine: I can browse to the backups, but it says they are invalid and refuses to let me view them or continue with the restore.

    I tried installing XP to an extra hard drive on my machine; however, it bluescreens on me during the install process and I cry. I tried installing XP Pro to the new drive and attempted to restore over it; it of course blackscreened on me, as that was a stupid idea. I made two partitions on the new hard drive (apparently the BIOS on this accursed piece of junk doesn't allow partitions larger than 200 GB anyway, and thus it fails 40 minutes into the install with the ever-descriptive "Disk Read Error"). Guess how I spent last weekend?

    My last idea was to install XP Pro to the second partition and then use it to restore from backup to the first. After the first restart it gives me the error "Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware." My brain made one of those bad-hard-drive clicky noises. I've tried several boot disks, but they don't seem to work; if anyone has a link to a good one, it would be greatly appreciated.

    Anyone have any more ideas? I really hate asking about what seems like such a simple issue, but I am quite literally at my wit's end. Thanks -- and sorry for the really long post.

  • VirtualBox Port Forward not working when Guest IP *IS* specified (while doc says opposite)

    - by Patrick
    I'm trying to port forward from the host (Mac OS X) 127.0.0.1:8282 to the guest (CentOS) at 10.10.10.10:8080. Existing port forwards include 127.0.0.1:8181 and 9191 to the guest without any IP specified (so whatever it gets through DHCP, as explained in the documentation). Here is how the non-working binding was added:

        VBoxManage modifyvm "VM name" --natpf1 "rule3,tcp,127.0.0.1,8282,10.10.10.10,8080"

    Here is how the working ones were added:

        VBoxManage modifyvm "VM name" --natpf1 "rule1,tcp,127.0.0.1,8181,,80"
        VBoxManage modifyvm "VM name" --natpf1 "rule2,tcp,127.0.0.1,9191,,9090"

    And by "non-working", I of course mean not listening (as a prerequisite to forwarding):

        $ lsof -Pi -n | grep Virtual | grep LISTEN
        VirtualBo 27050 user   21u  IPv4 0x2bbdc68fd363175d  0t0  TCP 127.0.0.1:9191 (LISTEN)
        VirtualBo 27050 user   22u  IPv4 0x2bbdc68fd0e0af75  0t0  TCP 127.0.0.1:8181 (LISTEN)

    There should be a similar line above, but with 127.0.0.1:8282. Just to be clear, this port is listening perfectly fine on the guest itself. And when I remove the guest IP (i.e., clear the 10.10.10.10), the forward works fine, albeit to eth0 (not eth1, where I need it); I can tcpdump and watch the traffic flow back and forth. And yes, I've disabled iptables entirely while testing -- it's not getting blocked anywhere on the guest.

    As VirtualBox writes in their documentation, you are required to specify the guest IP if it's static (makes sense; there is no DHCP record for it to keep): "If for some reason the guest uses a static assigned IP address not leased from the built-in DHCP server, it is required to specify the guest IP when registering the forwarding rule:". However, doing so (as I need to) seems to break the port forward, with nary a report in any log file I can find. (I've reviewed everything in ~/Library/VirtualBox/.)

    Other notes: while I used the above command to add the third rule, I've also verified it showed up correctly in the GUI, and then removed/re-added it from there just to make sure. This forum link -- while very dated -- looks somewhat related, in that a port forward to a static IP was not appearing (perhaps they think due to lack of a gratuitous ARP being sent for the host to know the IP is there/available?).

    Anyway, what gives? Is this still buggy? Any suggestions? If not, easy enough workarounds? What's interesting is that this works perfectly fine on another user's Mac; however, he's running a slightly older version (4.3.6 vs. 4.3.12).
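
    In case it helps anyone reproduce this: the NAT rules VirtualBox actually has registered can be dumped straight from the VM definition with a read-only query (nothing below is specific to my setup beyond the VM name):

        VBoxManage showvminfo "VM name" | grep -i 'rule'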

  • Cannot get sound over HDMI in Kubuntu 9.10

    - by user32509
    I have used an HDMI cable to connect my LCD (which is connected to my speakers) to my NVIDIA GTX 275 graphics card. I cannot get the sound output to work. The hardware itself is working properly -- I tested it under Windows. Currently I am running Kubuntu 9.10 64-bit with the NVIDIA 190.53 driver. The sound output worked fine before I installed the HDMI connection. (My system normally prints this output in German -- I can change that if you tell me how :))

        $ aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

        $ aplay -L
        front:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            Front speakers
        surround40:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=Intel,DEV=0
            HDA Intel, ALC889A Digital
            IEC958 (S/PDIF) Digital Audio Output
        null
            Discard all samples (playback) or generate zero samples (capture)
        pulse
            Playback/recording through the PulseAudio sound server

    And I have unmuted all channels in KMix :)

    Edit: lspci -v

        ...
        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
            Subsystem: Giga-byte Technology Device a022
            Flags: bus master, fast devsel, latency 0, IRQ 22
            Memory at ea400000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel <?>
            Capabilities: [130] Root Complex Link <?>
            Kernel driver in use: HDA Intel
            Kernel modules: snd-hda-intel
        ...

        $ cat /proc/asound/version
        Advanced Linux Sound Architecture Driver Version 1.0.20.

        $ lsmod | grep snd_hda_intel
        snd_hda_intel          31880  2
        snd_hda_codec          87584  2 snd_hda_codec_realtek,snd_hda_intel
        snd_pcm                93160  3 snd_hda_intel,snd_hda_codec,snd_pcm_oss
        snd                    77096  16 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        snd_page_alloc         10928  2 snd_hda_intel,snd_pcm

    I think I am missing the something-hdmi module? Is there such a thing?
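
    For reference, when ALSA does expose an HDMI PCM, it shows up in aplay -l as an additional device on some card (often device 3), and that device can be probed directly -- the card/device numbers below are assumptions, not taken from the output above:

        speaker-test -D plughw:0,3 -c 2 -t wav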

  • Undelivered Mail Returned to Sender

    - by Alex
    When I send mail to [email protected] via PHP's mail() function, I receive it. When sending emails from external machines, the sender receives the following bounce (e.g., when sending from [email protected]; mail.ru is a Russian Gmail-like service):

        This is the mail system at host fallback2.mail.ru.

        I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below.

        For further assistance, please send mail to <postmaster>

        If you do so, please include this problem report. You can delete your own text from the attached returned message.

                        The mail system

        <[email protected]>: lost connection with mail.mydomain.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

        Reporting-MTA: dns; fallback2.mail.ru
        X-mPOP-Fallback_MX-Queue-ID: D8C19F2411F1
        X-mPOP-Fallback_MX-Sender: rfc822; [email protected]
        Arrival-Date: Tue, 29 Oct 2013 10:09:21 +0400 (MSK)

        Final-Recipient: rfc822; [email protected]
        Original-Recipient: rfc822;[email protected]
        Action: failed
        Status: 4.4.2
        Diagnostic-Code: X-mPOP-Fallback_MX; lost connection with mail.tld.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

    Here is my Postfix main.cf:

        command_directory = /usr/sbin
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        myhostname = mail.mydomain.com
        mydomain = mydomain.com
        myorigin = mydomain.com
        inet_interfaces = all
        inet_protocols = all
        unknown_local_recipient_reject_code = 550
        in_flow_delay = 1s
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mail_name = mydomain.com daemon
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        sendmail_path = /usr/sbin/sendmail.postfix
        newaliases_path = /usr/bin/newaliases.postfix
        mailq_path = /usr/bin/mailq.postfix
        setgid_group = postdrop
        html_directory = no
        manpage_directory = /usr/share/man
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        bounce_queue_lifetime = 4h
        maximal_queue_lifetime = 4h
        delay_warning_time = 1h
        strict_rfc821_envelopes = yes
        show_user_unknown_table_name = no
        allow_percent_hack = no
        swap_bangpath = no
        smtpd_delay_reject = yes
        smtpd_error_sleep_time = 20
        smtpd_soft_error_limit = 1
        smtpd_hard_error_limit = 3
        smtpd_junk_command_limit = 2
        mydestination = mydomain.com, localhost.localdomain, localhost
        smtpd_client_restrictions = permit_inet_interfaces
        smtpd_recipient_limit = 100
        virtual_alias_domains = mydomain.com
        virtual_alias_maps = hash:/etc/postfix/virtual
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes
        smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

    Why are emails from external servers not being delivered? Thank you!

    Update: In the log, the following lines appear many times:

        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: connect from fallback5.mail.ru[94.100.176.59]
        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: warning: SASL: Connect to private/auth failed: Connection refused
        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: fatal: no SASL authentication mechanisms

    It appears I have to configure SASL? I would understand if I wanted to send email through Postfix, but why do I need it to receive email?
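
    Since main.cf points smtpd_sasl_path at private/auth, those warnings suggest nothing is listening on that socket (it is normally provided by Dovecot). A minimal sketch of the matching Dovecot side -- assuming Dovecot 2.x with Postfix chrooted under /var/spool/postfix, so paths may differ -- looks like:

        # /etc/dovecot/conf.d/10-master.conf (sketch)
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            mode  = 0660
            user  = postfix
            group = postfix
          }
        }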

  • Linux clock loses 10 minutes every week

    - by PaKempf
    One of my Linux servers' clocks loses 10 minutes every now and then, nearly every week. I update the time so it stays correct, and although it doesn't really bother me, I'd like to fix it. I've been searching around a bit: nothing in the crontab could be responsible, and I can't find any related message in the logs. Some people seem to use NTP to fix this kind of issue, but I'd prefer not to run an unnecessary component on the machine.

    uname result:

        Linux unis-monitor 2.6.32-5-686 #1 SMP Mon Feb 25 01:04:36 UTC 2013 i686 GNU/Linux

    cat messages:

        Jul 14 06:25:06 unis-monitor rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="882" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'.
        Jul 15 06:25:05 unis-monitor rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="882" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'.

    cat syslog:

        Jul 15 06:25:05 unis-monitor rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="882" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'.
        Jul 15 06:39:01 unis-monitor /USR/SBIN/CRON[15272]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
        Jul 15 07:09:01 unis-monitor /USR/SBIN/CRON[15465]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
        Jul 15 07:17:01 unis-monitor /USR/SBIN/CRON[15521]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Jul 15 07:39:01 unis-monitor /USR/SBIN/CRON[15662]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
        Jul 15 08:09:01 unis-monitor /USR/SBIN/CRON[15855]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
        Jul 15 08:17:01 unis-monitor /USR/SBIN/CRON[15911]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Jul 15 08:39:01 unis-monitor /USR/SBIN/CRON[16052]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
        Jul 15 09:09:01 unis-monitor /USR/SBIN/CRON[16273]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)

    So, do you have any clue where to look, or what I could use to monitor those date changes? Some more info: the server is a virtual server hosted on Hyper-V on a Windows 2012 server. I don't know if that changes anything, since the other servers hosted there don't have this issue...
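
    If it turns out to be plain drift, a lightweight middle ground between doing nothing and running a full NTP daemon is a periodic one-shot sync -- a sketch only (ntpdate must be installed, and pool.ntp.org is just an example server):

        # /etc/cron.d/timesync: resync the clock once an hour
        0 * * * * root /usr/sbin/ntpdate -u pool.ntp.org >/dev/null 2>&1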

  • I can connect to the Samba server but cannot access shares

    - by jlego
    I'm having trouble getting Samba sharing working. I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web-development server. It needs to be able to share files with a Windows 7 PC and a Mac running OS X Snow Leopard. I set up Samba using the Samba configuration GUI tool on Fedora, added users to Fedora, and connected them as Samba users (with the same usernames and passwords as on the Windows and Mac machines). The workgroup name is the same as the Windows workgroup, and authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the Ethernet interface to a trusted port in the firewall.

    Both the Windows and Mac machines can connect to the server and view the shares; however, when trying to access the shares, Windows throws error 0x80070035: "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but choosing a share gives the error: "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for a username and password, which when authenticated gives a list of shares; but on choosing a share, the same error is displayed and the user cannot access it. Since both machines act similarly when trying to access the shares, I assume it is an issue with how Samba is configured. smb.conf:

        [global]
        workgroup = workgroup
        server string = Server
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        load printers = yes
        cups options = raw
        printcap name = lpstat
        printing = cups

        [homes]
        comment = Home Directories
        browseable = no
        writable = yes

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = yes
        printable = yes

        [FileServ]
        comment = FileShare
        path = /media/FileServ
        read only = no
        browseable = yes
        valid users = user1, user2

        [webdev]
        comment = Web development
        path = /var/www/html/webdev
        read only = no
        browseable = yes
        valid users = user1

    How do I get Samba sharing working?

    UPDATE: I figured it out -- it was because I was sharing a second hard drive. See the checked answer below.

    Speculation 1: Before this box, I had another box with the same version of Fedora (16) and Samba working for these same computers. I started up the old machine and copied its smb.conf to the new one (editing the share definitions for the new shares, of course), and I still get the same errors on both client machines. The only differences in environment are the hardware and the router: the old machine's router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs, though). Could either of these be affecting Samba?

    Speculation 2: As the directory I am trying to share is actually an entire internal disk, I have tried these things: 1) changing the owner of the mounted disk from root to my user (the same username as on the Windows machine); 2) making a share that includes only one of the folders on the disk instead of the entire disk, again with my user as the owner. Both tests failed, giving me the same errors regarding the network address.

    Speculation 3: Whenever I try to connect to the share on the Windows 7 client, I am prompted for my username and password. When I enter the correct credentials, I get an access-denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem.

    Speculation 4: I've now completely reinstalled Fedora and Samba. I created a share on the first hard drive (the one Fedora is installed on) and I can access it fine from Windows. However, when I try to share any data on the second disk, I receive the same error. This, I believe, is the problem: I think I need to change something in fstab or fdisk.

    Speculation 5: In fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mount-point directory, which now allows me to access the shares from the Windows machine -- however, I cannot see any of the files in the directory from Windows. (They are there; I can see them in the Fedora file browser locally.)
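
    For anyone hitting the same wall with a share on a second disk under Fedora, the label fix from Speculation 5 can be applied recursively and made persistent across relabels -- a sketch, assuming the mount point is /media/FileServ and the SELinux management tools (policycoreutils-python) are installed:

        # label the whole tree for Samba and make the rule survive relabels
        semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
        restorecon -Rv /media/FileServ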

  • Weird NFS performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor NFS performance between two Xen virtual machines (client and server) running on the same host. Specifically, the speed at which I can sequentially read a 1 GB file on the client is much lower than would be expected from the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04, and the server is using the nfs-kernel-server package.

    According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily used servers. What I find in my current configuration:

        RPCNFSDCOUNT=8 (default): 13.5-30 seconds to cat a 1 GB file on the client, so 35-80 MB/s
        RPCNFSDCOUNT=16: 18 seconds to cat the file, 60 MB/s
        RPCNFSDCOUNT=1: 8-9 seconds to cat the file (!!?!), 125 MB/s
        RPCNFSDCOUNT=2: 87 seconds to cat the file, 12 MB/s

    I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI passthrough; on the server I can cat the file in a few seconds (~250 MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread, as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread?

    A few other things I have tried changing, to little or no effect:

    - increasing /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K and 1M from the defaults of 192K and 256K
    - increasing /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K
    - mounting with the client options rsize=32768,wsize=32768

    From the output of sar -d, I understand that the actual read sizes going to the underlying device are rather small (<100 bytes), but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices, /dev/sda and /dev/sdb; dmraid picks up a fakeRAID-0 striped across them, which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details, I will add them.
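
    For reproducibility, each measurement above is of this shape -- a sketch of the test I describe, with the mount path an assumption:

        # on the client: drop page/dentry/inode caches, then time a sequential read
        sync; echo 3 > /proc/sys/vm/drop_caches
        time cat /mnt/nfs/bigfile > /dev/null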

  • nginx rewrite for WikkaWiki

    - by Hans
    I just set up WikkaWiki on my server and have been trying to make the links go from wiki.mysite.info/wikka.php?wakka=Start to wiki.mysite.info/DotMG. I tried following the guide at http://docs.wikkawiki.org/ModRewrite; however, it seems incomplete and outdated. Furthermore, as of version 1.3.2, base_url isn't even manually configurable from the wikka.config.php file. I am using WikkaWiki version 1.3.2. My nginx virtual-hosts file contains:

        server {
            listen 80;
            server_name wiki.mysite.info;
            root /usr/share/nginx/wikka/;
            access_log /usr/share/nginx/.access/wikka;
            error_log /usr/share/nginx/.error/wikka error;

            location / {
                index index.php;
                try_files $uri $uri/ @wikka;
            }

            location @wikka {
                rewrite ^(.*/[^\./]*[^/])$ $1/ last;
                rewrite ^(.*)$ /wikka.php?wakka=$1 last;
            }

            location ~* \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
            }
        }

    So far it mostly works: I can go to wiki.mysite.info/APage and it will display that page. However, it doesn't work on all pages; sometimes the browser simply downloads the page instead of rendering it (for some reason it always downloads the Start page). Also, when I go to wiki.mysite.info/ it downloads the wikka.php file... Furthermore, the links on the wiki still contain wikka.php?wakka=, so whenever I navigate around the wiki it goes back to wiki.mysite.info/wikka.php?wakka=APage. I think something is wrong with my rewrite, but I can't say for sure. Contents of fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param HTTPS $server_https;

        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;
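
    When a browser downloads a page instead of rendering it, the Content-Type response header usually tells the story (text/html versus something like application/octet-stream when the PHP block was never matched). A quick way to check without a browser -- the URL is just my example hostname:

        curl -sI http://wiki.mysite.info/ | grep -i '^Content-Type'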

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: all of a sudden, my computer feels sluggish. The mouse moves, but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load?

    Now, top and similar tools aren't the answer, because they show either CPU or memory usage, but not both at the same time. What I need is a single command which I might be able to type as it happens -- something that will figure out any of:

        The system is trying to swap 8 GB of RAM to disk because process X ...
        or: process X seeks all over the disk
        or: process X uses 400% CPU

    So what I'm looking for is iostat, htop/atop, and similar tools rolled into one, with output like this:

        1235 cp        - disk thrashing
          87 chrome    - uses 2 GB of RAM
         137 nfs_bench - uses 95% of the network bandwidth

    I don't want a tool that gives me some numbers which I can analyze, but a tool that tells me exactly which process causes the current load. Assume that the user in front of the keyboard barely knows how to write "process", but is quickly overwhelmed when it comes to "resident size", "virtual memory" or "process life cycle".

    My argument goes like this: a user notices a problem. There can be thousands of reasons... well, almost :-) The user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what these numbers mean. What I'm looking for is a meta tool: 99% of the data is irrelevant to the problem, so what the tool should do is look for processes which hog some resource and list only those, along with "this process needs a lot of CPU, this one produces many IRQs, this one allocates a lot of RAM (and it's still growing)". That would be a relatively short list. It would be much simpler for someone new to this to locate the culprit from such a list than from the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM -- the machine ought to swap itself to death, but of course this is a misinterpretation of the data that can happen quickly).
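
    As a very rough starting point for such a meta tool, even plain ps can show CPU and memory side by side in one snapshot -- a sketch of the idea, not the finished tool I'm asking for:

        # top offenders by CPU, with memory shown alongside
        ps -eo pid,comm,%cpu,%mem,rss --sort=-%cpu | head -n 11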

  • OpenVPN Chaining

    - by noderunner
    I'm trying to set up an OpenVPN "chain", similar to what is described here. I have two separate networks, A and B. Each network has an OpenVPN server using a standard "road warrior" or "client/server" approach; a client can connect to either one for access to the hosts/services on that respective network. But servers A and B are also connected to each other: the servers on each network have a "site-to-site" connection between them. What I'm trying to accomplish is the ability to connect to network A as a client and then make connections to hosts on network B. I'm using tun/routing for all of the VPN connections. The "chain" looks something like this:

        [Client] --- [Server A] --- [Server A] --- [Server B] --- [Server B] --- [Host B]
         (tun0)        (tun0)         (tun1)         (tun0)         (eth0)        (eth0)

    The whole idea is that Server A should route traffic destined for network B through the site-to-site VPN set up on tun1 when a client from tun0 tries to connect. I did this simply by setting up two connection profiles on Server A: one is a standard server config running on tun0, defining a virtual client network, IP address pool, pushed routes, etc.; the other is a client connection to Server B running on tun1. With ip_forwarding enabled, I then simply added a "push route" for the clients, advertising a route to network B.

    On Server A, this seems to work when I look at tcpdump output. If I connect as a client and then ping a host on network B, I can see the traffic getting passed from tun0 to tun1 on Server A:

        tcpdump -nSi tun1 icmp

    The weird thing is that I don't see Server B receiving that traffic through the tunnel. It's as if Server A is sending it through the site-to-site connection like it should, but Server B is completely ignoring it; when I look for the traffic on Server B, it simply isn't there. A ping from Server A to Host B works fine, but a ping from a client connected to Server A to Host B does not.

    I'm wondering if Server B is ignoring the traffic because the source IP does not match the client IP pool that it hands out to its own clients? Does anyone know if I need to do something on Server B in order for it to see the traffic? This is a complicated problem to explain, so thanks if you stuck with me this far.
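
    If the source address really is what Server B is dropping, one commonly used workaround is to NAT the client traffic onto the site-to-site tunnel at Server A, so Server B only ever sees Server A's own tunnel address -- a sketch, with the road-warrior client pool (10.8.0.0/24) an assumption:

        # on Server A: masquerade road-warrior clients going out the site-to-site tunnel
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o tun1 -j MASQUERADE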

  • Why is Linux choosing the wrong source IP address

    - by Scheintod
    ...and what can I do to make it choose the right one? This all happens inside an OpenVZ container. The host is Debian Wheezy with a Red Hat/OpenVZ kernel:

        root@mycl2:~# uname -a
        Linux mycl2 2.6.32-openvz-042stab081.5-amd64 #1 SMP Mon Sep 30 16:40:27 MSK 2013 x86_64 GNU/Linux

    The container has two (virtual) network interfaces, one in public and one in private address space:

        root@mycl2:~# ifconfig
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:127.0.0.2  P-t-P:127.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:475 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:775 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:32059 (31.3 KiB)  TX bytes:56309 (54.9 KiB)

        venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:80.123.123.29  P-t-P:80.123.123.29  Bcast:80.123.123.29  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

        venet0:1  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.0.1.29  P-t-P:10.0.1.29  Bcast:10.0.1.29  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

    The route to the private network is set manually:

        root@mycl2:~# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 venet0
        0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 venet0

    Trying to ping others on the private network leads to the wrong source address being chosen:

        root@mycl2:~# ip route get 10.0.1.26
        10.0.1.26 dev venet0  src 80.123.123.29
            cache  mtu 1500 advmss 1460 hoplimit 64

    Why is this, and what can I do about it?

    EDIT: If I create the route with (thanks to Joshua)

        ip route add 10.0.0.0/8 dev venet0 src 10.0.1.29

    it works. But according to man ip-route, the src parameter should only set the source IP if this route is chosen -- and if this route is chosen, the source IP would be that anyway.

  • Nginx Load Balancer 403 error

    - by user64473
    I am trying to set up nginx as a load balancer with Apache backends, so that when I point my sites at the nginx server it serves up the content from the Apache backends. I have the Apache configuration set up correctly on both (i.e., when I go to the site on the Apache servers directly, it works great), but when I use the nginx load balancer as the site I get a 403 error. I have no idea why, as it isn't even accessing any files on the nginx server itself -- there aren't any files there to be forbidden access to. My virtual host is enabled and looks like this:

        upstream webs {
            server 10.0.0.30 weight=1;
            server 10.0.0.31 weight=1;
        }

        server {
            listen 80;
            server_name www.example.com example.com;
            access_log /var/log/nginx/access.log;

            location / {
                proxy_pass http://webs;
                include /etc/nginx/proxy.conf;
            }
        }

    and my nginx.conf looks like this:

        user www-data;
        worker_processes 4;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;

            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffers 32 4k;
        }

    Can any geniuses out there tell me what I am doing wrong?
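
    One quick way to tell whether the 403 is generated by nginx itself or relayed from an Apache backend is to look at the response headers, since the Server header will differ between the two -- hostname is just my example:

        curl -sv http://www.example.com/ 2>&1 | grep -Ei '^< (HTTP|Server)'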

  • Asterisk server firewall script allows 2-way audio on incoming calls, but not on outgoing?

    - by cappie
    I'm running an Asterisk PBX on a virtual machine directly connected to the Internet, and I really want to keep script kiddies, l33t h4x0rz, and actual hackers out of my server. The basic way I protect my calling bill right now is by using 32-character passwords, but I would much rather have firewall-level protection as well. The firewall script I'm currently using is stated below; however, without the established-connections rule (mentioned rule #1), I cannot receive incoming audio from the target during outgoing calls:

        #!/bin/bash
        # first, clean up!
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        iptables -P INPUT ACCEPT
        iptables -P FORWARD DROP  # we're not a router
        iptables -P OUTPUT ACCEPT

        # don't allow invalid connections
        iptables -A INPUT -m state --state INVALID -j DROP

        # always allow connections that are already set up (MENTIONED RULE #1)
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # always accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # always accept traffic on these ports
        #iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # always allow DNS traffic
        iptables -A INPUT -p udp --sport 53 -j ACCEPT
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT

        # allow return traffic to the PBX
        iptables -A INPUT -p udp -m udp --dport 50000:65536 -j ACCEPT
        iptables -A INPUT -p udp -m udp --dport 10000:20000 -j ACCEPT
        iptables -A INPUT -p udp --destination-port 5060:5061 -j ACCEPT
        iptables -A INPUT -p tcp --destination-port 5060:5061 -j ACCEPT
        iptables -A INPUT -m multiport -p udp --dports 10000:20000
        iptables -A INPUT -m multiport -p tcp --dports 10000:20000

        # IP addresses of the office
        iptables -A INPUT -s 95.XXX.XXX.XXX/32 -j ACCEPT

        # accept everything from the trunk IP's
        iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT
        iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT

        # accept everything on localhost
        iptables -A INPUT -i lo -j ACCEPT

        # accept all outgoing traffic
        iptables -A OUTPUT -j ACCEPT

        # DROP everything else
        #iptables -A INPUT -j DROP

    I would like to know which firewall rule I'm missing for this all to work. There is so little documentation on which ports (incoming and outgoing, return ports included) Asterisk actually needs. Are there any firewall/iptables specialists here who see major problems with this firewall script? It's so frustrating not being able to find a simple firewall solution that lets me have a PBX running somewhere on the Internet, firewalled in such a way that it ONLY allows connections from and to the office, the DNS servers and the trunk(s) (and only supports SSH (port 22) and ICMP traffic for the outside world). Hopefully, using this question, we can solve this problem once and for all.
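
    One thing that sometimes closes exactly this gap (RTP flowing back on ports negotiated inside SIP) is the kernel's SIP connection-tracking helper, which lets the RELATED state match the return media streams. A sketch, not a guaranteed fix, and it assumes the module is available on the kernel in use:

        # load SIP-aware connection tracking so return RTP counts as RELATED
        modprobe nf_conntrack_sip
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT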

  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single-Xeon Dell server with RAID 5 7200 RPM disks and 2 GB RAM (SBS 2003). I have not done any baseline measurements on the old physical server, but the web app is used by half a dozen people (maybe 2 concurrently), so I figured, "how bad can an Amazon EC2 instance be?" It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried:

    - Had a Small instance; now running a c1.medium (High-CPU Medium) Windows 2008 32-bit R2 EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement.
    - Changed the page file from a fixed 256 MB to Automatic.
    - Set up a striped mirror from within Disk Management with two attached 1 GB EBS volumes. Moved the database and transaction log there, left everything else on the boot EBS volume. No noticeable change.
    - Looked at memory: ~1000 MB of physical memory free (1.7 GB total). Changed the SQL instance to use a minimum of 1024 MB of RAM; restarted the server, no change in memory usage. SQL is still using only ~28 MB of RAM(!).

    So I'm thinking: this database is tiny (28 MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB, which seems kind of large in comparison -- has it not been committed? Is it a cause of performance degradation? I recall something about recovery models and log sizes from somewhere in my travels, but I'm not positive.

    Another thing: the old server was running SQL Express 2005. Not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, and that had no effect.

    Anyway, what else can I try here? It seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?
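
    On the recovery-model hunch: under FULL recovery, the log only truncates on log backups, so a 241 MB log next to a 28 MB database is plausible. A sketch of checking, and (if point-in-time restore isn't needed for this app) switching to SIMPLE -- the instance and database names are placeholders:

        sqlcmd -S .\SQLEXPRESS -Q "SELECT name, recovery_model_desc FROM sys.databases"
        sqlcmd -S .\SQLEXPRESS -Q "ALTER DATABASE MyAppDb SET RECOVERY SIMPLE"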

  • Windows Update and IE fail to connect, but Chrome works fine?

    - by I Gottlieb
    I'm out of ideas on this one. (Running Windows Vista.) I have a program that accesses the Internet to retrieve financial market data. One day it tells me that it can't log in -- timeout error. I check the documentation and it says it must have a working copy of the IE browser installed. I check IE (I have IE9) and, sure enough, it just spins. No error message, no timeout, no "try later" -- it just spins, as far as I can tell, indefinitely. Any page, any address; even access to a localhost site just spins. Chrome works fine. So does another program I have that fetches market data. Windows "diagnose and repair" says my Internet connection is working fine. I tried an uninstall/reinstall of IE -- same spinning.

    I then tried to install Windows updates, and guess what? I can't. It comes up with error 80072efd; I checked the documentation for the error, and it says I should check for firewall blockage. The thing is, the only firewall I have is Windows Firewall, and obviously it wouldn't be blocking Windows Update. In contrast, Windows Help (in all programs) has no problem accessing the Internet.

    I had a filter on the Internet connection, and it was updated just prior to the first appearance of the problem. But I uninstalled the filter entirely (officially, with a password from the company's service rep), and it made no difference. I'm guessing that a high-level Windows network service file is corrupted -- used only by MS programs and their ilk -- but how do I find it? I'd like to avoid having to do a clean install of Windows. Much obliged for any insight. IG

    Ramhound -- thanks for the reply. I'm familiar with virtual machines as in, e.g., the JVM, or an emulator for an alternative architecture, or (theoretical) Turing machine equivalence, but I'm not familiar with the way you're using the term. Please clarify: what does one need for this VM "test", and why do you expect it will provide insight into the problem? And what sort of "configuration issue" are you referring to? IG

  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second web server because I worry about hardware failure of my old server. Now that I have that second server, I wish to do a little more than just have one server on standby, replicating all day. As long as it's there, I might as well get some advantage out of it!

    I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves Heartbeat for IP failover and two identical servers. Of the two servers, only one has a public IP address; if one server crashes, the other server takes over the public IP address. On an incoming request, nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive), as sketched below. Once the request has been sent to php-fpm, both servers look at localhost for the MySQL server; I use master-master MySQL replication for this. The filesystem is synced with lsyncd. This works pretty well, but I've read that master-master replication is discouraged by the (MySQL) community.

    Another option I could think of is to use one server as a MySQL master and one server as a web/PHP server. The servers would still sync their filesystems and still run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive, I could simply prefer nginx to listen on IP A and MySQL on IP B. If one server is down, the other server could take over the tasks of the other, simply by IP switching.

    But I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this, please let me know!

    PS: virtualisation, hosting in different locations, and active/passive setups are not solutions I'm looking for. I find virtual servers either too slow or too expensive. I already have a passive failover at another location, but in case of a crash I found the site was still unreachable for too long due to DNS caching.
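
    For reference, the 50/50 php-fpm split can be expressed on the nginx side with a weighted upstream -- this is only a sketch with made-up addresses, not my actual config:

        upstream phpfpm {
            server 10.0.0.1:9000 weight=1;  # server A
            server 10.0.0.2:9000 weight=1;  # server B
        }
        # inside the server block:
        #     location ~ \.php$ { fastcgi_pass phpfpm; ... }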

  • What server setup for a small web development company? [closed]

    - by Giordano
    I co-own a company with a friend of mine, and we have decided to buy a new server to support our business (our current server is an Asus EEE Box -- working great, but too limited :) ). I should mention that we are web developers, but occasionally we do small-office sysadmin work. Thus, 99% of the time we work on GNU/Linux (mainly Ubuntu), but from time to time we need to set up a Windows environment to assist some customers (e.g., set up a temporary SQL Server 2008).

    Our requirements:

    - Low budget: we don't want the cheapest solution out there, but we can't afford to spend too much. The budget could be ~1000-1500 EUR (before VAT).
    - Robustness: we would like to set up a RAID array, and maybe have an external disk where we can store backups.
    - Virtualization: we need to be able to set up a few servers for development. The scenario is something like this (~8 appliances running in parallel): a Redmine + Git server, a Bacula server, an FTP server, and 3-4 virtual appliances that could be set up on demand to test our applications or support a customer (LAMP, Tomcat + PostgreSQL, SQL Server).
    - Support: if something breaks down, it shouldn't be too difficult to find a replacement.

    Now, given the main requirements, there are some doubts we need to clarify:

    - Do you suggest buying a prepackaged solution (for example, a customized Dell PowerEdge T110 or T310) or assembling the server ourselves (buying the separate components)?
    - What RAID configuration do you suggest? I was thinking of RAID 1 (probably cheaper) or RAID 5.
    - Should we buy a hardware RAID controller, or is it OK to use software RAID (mdadm, sketched below)? If so, which controller do you suggest?
    - What processor do you suggest (Intel Xeon, i3, i5, i7, AMD)?
    - How much RAM? (I was thinking at least 8 GB, ~1 GB per appliance.)
    - What virtualization software do you recommend? VMware seems to be the best choice, but what about Xen or KVM? We don't want to buy licenses at the moment, so we would like to consider only free options.
    - What OS do you recommend? We know Ubuntu, Debian and Gentoo very well (we would like to use Ubuntu Server); however, it seems a lot of people go for CentOS.

    Thanks in advance if you can help us with this! It's our first "serious" server, so many doubts popped up :) Please feel free to add further recommendations if you have some to share ;) Have a nice day!
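
    On the software-RAID option, for scale: creating a mirrored pair with mdadm takes only a couple of commands. A sketch with placeholder device names, not a recommendation either way:

        # create a RAID 1 mirror from two whole disks (device names are placeholders)
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext4 /dev/md0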

  • Compiling JS-Test-Driver Plugin and Installing it on Eclipse 3.5.1 Galileo?

    - by leeand00
    I downloaded the source of js-test-driver from http://js-test-driver.googlecode.com/svn/tags/1.2. It compiles just fine, but one of the unit tests fails:

        [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 0.012 sec
        [junit] Test com.google.jstestdriver.eclipse.ui.views.FailureOnlyViewerFilterTest FAILED

    I am using ANT 1.7.1 and javac 1.6.0_12, and I'm trying to install the js-test-driver plugin on Eclipse 3.5.1 Galileo.

    Despite the failed test, I installed the plugin into my C:\eclipse\dropins\js-test-driver directory by copying (exporting from SVN) the compiled feature and plugins directories there, to see if it would yield any hints about the problem. When I started Eclipse and added the plugin to the panel using Window > Show View > Other... > Other > JsTestDriver, the plugin for the panel is added, but it displays the following error instead of the plugin in the panel:

        Could not create the view: Plugin com.google.jstestdriver.eclipse.ui was unable to load class com.google.jstestdriver.eclipse.ui.views.JsTestDriverView.

    And then, below that, I get the following stack trace after clicking Details:

        java.lang.ClassNotFoundException: com.google.jstestdriver.eclipse.ui.views.JsTestDriverView
            at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:494)
            at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
            at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:398)
            at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:105)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at org.eclipse.osgi.internal.loader.BundleLoader.loadClass(BundleLoader.java:326)
            at org.eclipse.osgi.framework.internal.core.BundleHost.loadClass(BundleHost.java:231)
            at org.eclipse.osgi.framework.internal.core.AbstractBundle.loadClass(AbstractBundle.java:1193)
            at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.createExecutableExtension(RegistryStrategyOSGI.java:160)
            at org.eclipse.core.internal.registry.ExtensionRegistry.createExecutableExtension(ExtensionRegistry.java:874)
            at org.eclipse.core.internal.registry.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:243)
            at org.eclipse.core.internal.registry.ConfigurationElementHandle.createExecutableExtension(ConfigurationElementHandle.java:51)
            at org.eclipse.ui.internal.WorkbenchPlugin$1.run(WorkbenchPlugin.java:267)
            at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
            at org.eclipse.ui.internal.WorkbenchPlugin.createExtension(WorkbenchPlugin.java:263)
            at org.eclipse.ui.internal.registry.ViewDescriptor.createView(ViewDescriptor.java:63)
            at org.eclipse.ui.internal.ViewReference.createPartHelper(ViewReference.java:324)
            at org.eclipse.ui.internal.ViewReference.createPart(ViewReference.java:226)
            at org.eclipse.ui.internal.WorkbenchPartReference.getPart(WorkbenchPartReference.java:595)
            at org.eclipse.ui.internal.Perspective.showView(Perspective.java:2229)
            at org.eclipse.ui.internal.WorkbenchPage.busyShowView(WorkbenchPage.java:1067)
            at org.eclipse.ui.internal.WorkbenchPage$20.run(WorkbenchPage.java:3816)
            at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
            at org.eclipse.ui.internal.WorkbenchPage.showView(WorkbenchPage.java:3813)
            at org.eclipse.ui.internal.WorkbenchPage.showView(WorkbenchPage.java:3789)
            at org.eclipse.ui.handlers.ShowViewHandler.openView(ShowViewHandler.java:165)
            at org.eclipse.ui.handlers.ShowViewHandler.openOther(ShowViewHandler.java:109)
            at org.eclipse.ui.handlers.ShowViewHandler.execute(ShowViewHandler.java:77)
            at org.eclipse.ui.internal.handlers.HandlerProxy.execute(HandlerProxy.java:294)
            at org.eclipse.core.commands.Command.executeWithChecks(Command.java:476)
            at org.eclipse.core.commands.ParameterizedCommand.executeWithChecks(ParameterizedCommand.java:508)
            at org.eclipse.ui.internal.handlers.HandlerService.executeCommand(HandlerService.java:169)
            at org.eclipse.ui.internal.handlers.SlaveHandlerService.executeCommand(SlaveHandlerService.java:241)
            at org.eclipse.ui.internal.ShowViewMenu$3.run(ShowViewMenu.java:141)
            at org.eclipse.jface.action.Action.runWithEvent(Action.java:498)
            at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:584)
            at org.eclipse.jface.action.ActionContributionItem.access$2(ActionContributionItem.java:501)
            at org.eclipse.jface.action.ActionContributionItem$5.handleEvent(ActionContributionItem.java:411)
            at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
            at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003)
            at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3880)
            at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3473)
            at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405)
            at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369)
            at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221)
            at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500)
            at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
            at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493)
            at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
            at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113)
            at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559)
            at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514)
            at org.eclipse.equinox.launcher.Main.run(Main.java:1311)

    Additionally, if I go to the settings in Window > Preferences and try to view the JS Test Driver preference page, I get the following dialog:

        Problem Occurred
        Unable to create the selected preference page.
        com.google.jstestdriver.eclipse.ui.WorkbenchPreferencePage

    Thank you,
    Andrew J. Leer

  • no namenode error in pseudo-mode

    - by Anshu Basia
    I'm new to Hadoop and still in the learning phase. As per the Hadoop Definitive Guide, I have set up Hadoop in pseudo-distributed mode and everything was working fine. I was even able to execute all the examples from chapter 3 yesterday. Today, after rebooting my Unix box, I ran start-dfs.sh and then tried http://localhost:50070, which shows an error, and when I try to stop dfs (stop-dfs.sh) it says there is no namenode to stop. I have been googling the issue with no result. Also, when I format my namenode again, everything starts working fine: I'm able to connect to localhost:50070 and even replicate files and directories in HDFS. But as soon as I restart Linux and try to connect to HDFS, the same problem comes up. Below is the error log:

        ************************************************************/
        2011-06-22 15:45:55,249 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
        /************************************************************
        STARTUP_MSG: Starting NameNode
        STARTUP_MSG:   host = ubuntu/127.0.1.1
        STARTUP_MSG:   args = []
        STARTUP_MSG:   version = 0.20.203.0
        STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
        ************************************************************/
        2011-06-22 15:45:56,383 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
        2011-06-22 15:45:56,455 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
        2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
        2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
        2011-06-22 15:45:57,007 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
        2011-06-22 15:45:57,031 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
        2011-06-22 15:45:57,059 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
        2011-06-22 15:45:57,070 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
        2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
        2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
        2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
        2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
        2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=anshu
        2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
        2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
        2011-06-22 15:45:57,868 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
        2011-06-22 15:45:57,869 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
        2011-06-22 15:45:58,769 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
        2011-06-22 15:45:58,809 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
        2011-06-22 15:45:58,825 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anshu/dfs/name does not exist.
        2011-06-22 15:45:58,827 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
            at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
        2011-06-22 15:45:58,828 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
            at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
        2011-06-22 15:45:58,829 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
        /************************************************************
        SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
        ************************************************************/

    Any help is appreciated. Thank you!
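    The telling line is the InconsistentFSStateException on /tmp/hadoop-anshu/dfs/name: by default the namenode keeps its image under hadoop.tmp.dir, which resolves to a path under /tmp, and most Linux distributions clear /tmp on reboot. That would explain why reformatting always "fixes" things until the next restart. A minimal sketch of the usual remedy for conf/hdfs-site.xml; the /home/anshu paths are examples, any reboot-safe directory the hadoop user can write works:

        <configuration>
          <!-- keep the namenode image somewhere that survives reboots -->
          <property>
            <name>dfs.name.dir</name>
            <value>/home/anshu/hdfs/name</value>
          </property>
          <!-- and the datanode block storage too -->
          <property>
            <name>dfs.data.dir</name>
            <value>/home/anshu/hdfs/data</value>
          </property>
        </configuration>

    After changing this, format the namenode one last time (hadoop namenode -format) and the metadata should persist across reboots from then on.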


  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
    Hi folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites site1, site2, site3. With the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2 for simplicity. Updating these sites is accomplished using SVN, through a post-commit script that auto-updates them (with .svn blocked through the Apache configuration). There are two regular maintainers of these sites; we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts.

    So I've done the easy bit: I managed to get SVN working for these maintainers, so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions. Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob both have read and write access to the webfront directories for each of the sites.

    So, an example of how it's set up now: Joe and Bob both belong to a group called "dev". I have the master /svn folders set up with read and write access for this group, and it works great. The post-commit trigger updates the site and then sets 777 on each file within the webfront. I then changed this to try and factor in group permission updates instead of straight 777: each file in /home/site1/public_html initially gets a chmod of 664, and each folder 775, which looks a little something like this:

        drwxrwxr-x  .
        drwxrwxr-x  ..
        drwxrwxr-x  site1 site1 my_test_folder
        -rw-rw-r--  site1 site1 my_test_file

    So site1 is the owner and group owner of those files and folders. I then added site1 to Joe's and Bob's secondary groups so that the SVN update will correctly allow access to these files. Herein lies the problem. When I add a file or folder to /home/site1, say bobs_file, it then looks like this:

        drwxrwxr-x  .
        drwxrwxr-x  ..
        drwxr-xr-x  Bob   dev   bobs_folder
        drwxrwxr-x  site1 site1 my_test_folder
        -rw-rw-r--  Bob   dev   bobs_file
        -rw-rw-r--  site1 site1 my_test_file

    How can I get it so that, with the permissions Bob has available, the owner and group owner of that file end up as site1:site1? As Bob belongs to dev I can set the permissions correctly with chmod, but chgrp keeps throwing back operation errors.

    That was long-winded, but it gives an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:
    - two people able to update multiple sites under the account structure WHM imposes
    - keep the owner/group of files and folders as the original site account, not the account of whoever updated them
    - I like the security of SVN+SSH over plain SVN
    - I don't want to run all this as root

    I hope this made sense, and thanks in advance :)
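    On the chgrp errors: a non-root user can only chgrp a file he owns, and only to a group his current login session is actually in; memberships added while a shell is open don't take effect until the next login (compare the output of id with /etc/group). Changing the owner to site1, on the other hand, always requires root. A sketch of the usual pattern that avoids 777, assuming the paths above; run the first block once as root:

        # make site1 own the tree, and set the setgid bit on directories
        # so files Joe and Bob create inherit group site1 automatically
        chown -R site1:site1 /home/site1/public_html
        find /home/site1/public_html -type d -exec chmod 2775 {} \;

        # optional: a narrow sudo rule (e.g. in /etc/sudoers.d/svn-deploy,
        # name is an example; double-check the escaping against sudoers(5))
        # so the post-commit hook can reset ownership on just this tree
        %dev ALL=(root) NOPASSWD: /bin/chown -R site1\:site1 /home/site1/public_html

        # the post-commit hook can then finish with:
        sudo /bin/chown -R site1:site1 /home/site1/public_html

    With the setgid bit in place, the group alone is usually enough for Apache and for the other maintainer; the sudo step only matters if the owner column really must say site1 too.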


  • Django TemplateSyntaxError only on live server (templates exist)

    - by Tom
    I'm getting a strange error that only occurs on the live server. My Django templates directory is set up like so:

        base.html
        two-column-base.html
        portfolio/
            index.html
        extranet/
            base.html
            index.html

    The portfolio pages work correctly locally on multiple machines. They inherit from either the root base.html or two-column-base.html. However, now that I've posted them to the live box (local machines are Windows, live is Linux), I get a TemplateSyntaxError: "Caught TemplateDoesNotExist while rendering: base.html" when I try to load any portfolio pages. It seems to be a case where the extends tag won't work in that root directory (???). Even if I do a direct_to_template on two-column-base.html (which extends base.html), I get that error. The extranet pages all work perfectly, but those templates all live inside the /extranet folder and inherit from /extranet/base.html.

    Possible issues I've checked:
    - file permissions on the server are fine
    - the template directory is correct on the live box (I'm using os.path.dirname(os.path.realpath(__file__)) to make things work across machines)
    - files exist, and the /templates directories exactly match my local copy
    - removing the {% extends %} block from the top of any broken template causes the templates to render without a problem
    - manually starting a shell session and calling get_template on any of the files works, but trying to render blows up with the same exception on any of the extended templates. Doing the same with base.html, it renders perfectly (base.html also renders via direct_to_template)

    Django 1.2, Python 2.6 on WebFaction. Apologies in advance because this is my 3rd or 4th "I'm doing something stupid" question in a row. The only x-factor I can think of is that this is my first time using Mercurial instead of svn. Not sure how I could have messed things up via that.

    EDIT: One possible source of problems: the local machine is Python 2.5, live is 2.6. Here's a traceback of me trying to render 'two-column-base.html', which extends 'base.html'. Both files are in the same directory, so if it can find the first, it can find the second. c is just an empty Context object.
        >>> render_to_string('two-column-base.html', c)
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader.py", line 186, in render_to_string
            return t.render(context_instance)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/__init__.py", line 173, in render
            return self._render(context)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/__init__.py", line 167, in _render
            return self.nodelist.render(context)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/__init__.py", line 796, in render
            bits.append(self.render_node(node, context))
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/debug.py", line 72, in render_node
            result = node.render(context)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader_tags.py", line 103, in render
            compiled_parent = self.get_parent(context)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader_tags.py", line 100, in get_parent
            return get_template(parent)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader.py", line 157, in get_template
            template, origin = find_template(template_name)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader.py", line 138, in find_template
            raise TemplateDoesNotExist(name)
        TemplateSyntaxError: Caught TemplateDoesNotExist while rendering: base.html

    I'm wondering if this is somehow related to the template caching that was just added to Django.

    EDIT 2 (per lazerscience): template-related settings:

        import os
        PROJECT_ROOT = os.path.dirname(os.path.realpath(__file__))
        TEMPLATE_DIRS = (
            os.path.join(PROJECT_ROOT, 'templates'),
        )

    sample view:

        def project_list(request, jobs, extra_context={}):
            context = {
                'jobs': jobs,
            }
            print context
            context.update(extra_context)
            return render_to_response('portfolio/index.html', context,
                                      context_instance=RequestContext(request))

    The templates in reverse order are:

        http://thosecleverkids.com/junk/index.html
        http://thosecleverkids.com/junk/portfolio-base.html
        http://thosecleverkids.com/junk/two-column-base.html
        http://thosecleverkids.com/junk/base.html

    though in the real project the first two live in a directory called "portfolio".
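    Given that it works on Windows and dies on Linux, the classic culprit is filename case: Windows will happily resolve {% extends "base.html" %} against a file saved as Base.html, while Linux's case-sensitive filesystem will not. A quick sanity check from python manage.py shell on the live box (Python 2 print statements; names taken from the snippets above):

        from django.conf import settings
        print settings.TEMPLATE_DIRS          # the absolute path actually searched

        import os
        print os.listdir(settings.TEMPLATE_DIRS[0])   # is base.html there, with exactly that case?

        from django.template.loader import find_template
        print find_template('base.html')      # raises TemplateDoesNotExist if the loader can't see it

    If find_template succeeds here but {% extends %} still fails under the web server, comparing this shell's settings module with the one the live WSGI process imports would be the next thing to rule out.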


  • Thinking Sphinx not working in test mode

    - by J. Pablo Fernández
    I'm trying to get Thinking Sphinx to work in test mode in Rails. Basically, this:

        ThinkingSphinx::Test.init
        ThinkingSphinx::Test.start

    freezes and never comes back. My database configuration is the same for test and development:

        dry_setting: &dry_setting
          adapter: mysql
          host: localhost
          encoding: utf8
          username: rails
          password: blahblah

        development:
          <<: *dry_setting
          database: proj_devel
          socket: /tmp/mysql.sock # sphinx requires it

        test:
          <<: *dry_setting
          database: proj_test
          socket: /tmp/mysql.sock # sphinx requires it

    and sphinx.yml:

        development:
          enable_star: 1
          min_infix_len: 2
          bin_path: /opt/local/bin

        test:
          enable_star: 1
          min_infix_len: 2
          bin_path: /opt/local/bin

        production:
          enable_star: 1
          min_infix_len: 2

    The generated config files, config/development.sphinx.conf and config/test.sphinx.conf, only differ in database names, directories and similar things; nothing functional. Generating the index for development goes without an issue:

        $ rake ts:in
        (in /Users/pupeno/proj)
        default config
        Generating Configuration to /Users/pupeno/proj/config/development.sphinx.conf
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff

        using config file '/Users/pupeno/proj/config/development.sphinx.conf'...
        indexing index 'user_core'...
        collected 7 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, 100.0% done
        sorted 0.0 Mhits, 99.8% done
        total 7 docs, 422 bytes
        total 0.098 sec, 4320.80 bytes/sec, 71.67 docs/sec
        indexing index 'user_delta'...
        collected 0 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, nan% done
        total 0 docs, 0 bytes
        total 0.010 sec, 0.00 bytes/sec, 0.00 docs/sec
        distributed index 'user' can not be directly indexed; skipping.

    but when I try to do it for test it freezes:

        $ RAILS_ENV=test rake ts:in
        (in /Users/pupeno/proj)
        DEPRECATION WARNING: require "activeresource" is deprecated and will be removed in Rails 3. Use require "active_resource" instead.. (called from /Users/pupeno/.rvm/gems/ruby-1.8.7-p249/gems/activeresource-2.3.5/lib/activeresource.rb:2)
        default config
        Generating Configuration to /Users/pupeno/proj/config/test.sphinx.conf
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff

        using config file '/Users/pupeno/proj/config/test.sphinx.conf'...
        indexing index 'user_core'...

    It's been there for more than 10 minutes, and the user table has 4 records. The two database directories look quite different, but I don't know what to make of that:

        $ ls -l db/sphinx/development/
        total 96
        -rw-r--r--  1 pupeno  staff   196 Mar 11 18:10 user_core.spa
        -rw-r--r--  1 pupeno  staff  4982 Mar 11 18:10 user_core.spd
        -rw-r--r--  1 pupeno  staff   417 Mar 11 18:10 user_core.sph
        -rw-r--r--  1 pupeno  staff  3067 Mar 11 18:10 user_core.spi
        -rw-r--r--  1 pupeno  staff    84 Mar 11 18:10 user_core.spm
        -rw-r--r--  1 pupeno  staff  6832 Mar 11 18:10 user_core.spp
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:10 user_delta.spa
        -rw-r--r--  1 pupeno  staff     1 Mar 11 18:10 user_delta.spd
        -rw-r--r--  1 pupeno  staff   417 Mar 11 18:10 user_delta.sph
        -rw-r--r--  1 pupeno  staff     1 Mar 11 18:10 user_delta.spi
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:10 user_delta.spm
        -rw-r--r--  1 pupeno  staff     1 Mar 11 18:10 user_delta.spp

        $ ls -l db/sphinx/test/
        total 0
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:11 user_core.spl
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:11 user_core.tmp0
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:11 user_core.tmp1
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:11 user_core.tmp2
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:11 user_core.tmp7

    Nothing gets added to any log when this happens. Any ideas where to go from here?
    I can run the indexer manually:

        /opt/local/bin/indexer --config config/test.sphinx.conf --all

    but it produces the same output as rake ts:in, so no help there.
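    Since the hand-run indexer freezes at the same point, the hang is below Thinking Sphinx: the indexer's sql_query against proj_test is most likely blocked inside MySQL, and a stale connection from an earlier test run holding a lock on the users table would produce exactly this (the zero-byte user_core.spl lock file and empty tmp files fit an indexer stuck before reading any rows). Two things worth checking from the mysql client while the indexer is wedged; this is a guess at the cause, not a confirmed diagnosis:

        SHOW PROCESSLIST;                                  -- is the indexer's SELECT sitting in "Locked"?
        SHOW OPEN TABLES FROM proj_test WHERE In_use > 0;  -- which tables something still holds open

    If a leftover connection shows up, KILL <id> clears it; failing that, copying the generated sql_query out of config/test.sphinx.conf and running it by hand shows whether MySQL or Sphinx is the one stalling.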

