Search Results

Search found 17852 results on 715 pages for 'load balancer'.

  • Error when starting .Net-Application from ThinApp-Application

    - by user50209
    One of our customers uses SAP through VMware ThinApp. In SAP there is a button that launches a .NET application from a server. When the .NET application is started directly, there is no error. If the user tries to start it by clicking the button in the ThinApp application, it displays the following errors: Microsoft Visual C++ Runtime Library - R6034 - An application has made an attempt to load the C runtime library incorrectly. Please contact the application's support team for more information. After clicking "OK" it displays: Microsoft Visual C++ Runtime Library - Runtime Error! R6030 - CRT not initialized. So, does the customer have to install some components into his ThinApp package (if yes, which?) to get things working? Regards, inno

    [EDIT] @Sean: It's installed the following way: the .exe of the .NET application is on a mapped drive on a server. All clients have the prerequisites installed (the .NET Framework, for example) and start the .exe from the mapped drive. The ThinApp application tries to start this application and throws the exceptions mentioned above. AFAIK there are no entry points configured for this application. What I should also mention: the .NET application crashes during execution. We have a debug mode implemented that shows what the application is doing, and after some steps it crashes. The interesting point is: it's a .NET application, not a C++ application.

  • Rebuild Fedora 19 ISO adding Kickstart for USB install

    - by dooffas
    I am attempting to edit a Fedora 19 DVD ISO to add a kickstart file. I then need this ISO burnt to a USB stick for installation. The error I get when booting is: Warning: Could not boot. Warning: /dev/root does not exist. To try and determine which part of the process is failing I have broken the process down into separate stages.

    Step 1: Burn the original ISO "Fedora-19-x86_64-DVD.iso" (available here) to a pendrive and see if that will install:

        dd if=/path/to/iso of=/dev/sdc

    Burning this image was successful and it installed without issue.

    Step 2: Extract the ISO, repackage it and burn it to a pendrive and see if that will install. PLEASE NOTE: the final xorriso command below is shown on multiple lines for ease of reading; in fact it was run as a single command on one line.

        mkdir -p /mnt/linux
        mount -o loop /tmp/linux-install.iso /mnt/linux
        cd /mnt/
        tar -cvf - linux | (cd /var/tmp/ && tar -xf - )
        cd /var/tmp/linux
        xorriso -as mkisofs -R -J -V "NewFedoraImage" -o ouput/file.iso
            -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot
            -boot-load-size 4 -boot-info-table
            -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .

    This ISO was then burnt to a pendrive as before:

        dd if=/path/to/iso of=/dev/sdc

    This ISO burnt to the pen drive with no problem and will boot, and I then see the Fedora options screen. After choosing either "Install Fedora 19" or "Test this media & install Fedora 19" I receive the errors highlighted above. This means the kickstart file is not to blame, but rather the repackaging of the ISO. Is there something I am missing in the repackaging process? Any input would be great! NOTE: If it is of any help, I attempted Step 2 with an Ubuntu server ISO and the process was successful.
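    A minimal sketch of the repack step that keeps the original volume label, in case that is the culprit: the Fedora 19 isolinux entries locate the installer image by ISO label (inst.stage2=hd:LABEL=...), so relabelling the image with -V "NewFedoraImage" can leave dracut unable to find /dev/root. This is a guess rather than a verified fix; paths and the kickstart file name are placeholders.

        # Repack the extracted tree, reusing the label of the original ISO.
        set -e
        ORIG_LABEL=$(blkid -o value -s LABEL /tmp/linux-install.iso)   # e.g. "Fedora 19 x86_64"

        # drop the kickstart into the tree; then add to the relevant "append" line
        # in isolinux/isolinux.cfg something like:
        #   inst.ks=hd:LABEL=<same label, spaces escaped as \x20>:/ks.cfg
        cp /path/to/ks.cfg /var/tmp/linux/ks.cfg

        cd /var/tmp/linux
        xorriso -as mkisofs -R -J -V "$ORIG_LABEL" -o /var/tmp/output/file.iso \
            -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot \
            -boot-load-size 4 -boot-info-table \
            -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .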

  • How to bypass vpn talking to VMWare Guest?

    - by marc esher
    Greetings. Network/VPN n00b question here. I'm running VMware Workstation with a guest Windows 2003 Server that has SQL Server 2000 installed. The sole purpose of this guest is to house SQL Server... it needn't have internet access or access to any other resources on the network other than the host. When I launch the Check Point VPN software, the host routes through the company network before it connects to the guest... i.e. it's no longer a direct connection. I assume this is just how things are supposed to work. However, what's happening is that the connection between my host and the SQL Server instance on the guest intermittently drops. It's not consistent, and some databases on the server will be responsive while others aren't. It appears that the databases with the most traffic on the guest (the ones I'm hitting with load tests) are the ones that become intermittently unresponsive. This problem only manifests when the VPN is on; when it's off, I can pound away on this database with no troubles. Thanks for any advice!

  • flowchart for debugging a slow/unresponsive server

    - by davidosomething
    So the server is slow:

    Roll back to the previous known working build
      - Success? Code problem.
      - Fail? Go on.
    Ping the IP address
      - Success? Maybe a DNS problem, go on.
      - Fail? Server or connection problem, go on.
    Ping and tracert your domain.com from inside your network
      - Previous step succeeded, this fails: DNS problem.
      - Success? Go on.
      - Previous step failed and:
        - Fail? Go on, could be you or the network.
        - Success? Go on.
    Try it from outside your network (http://centralops.net/co/)
      - Fail? The server's network connection sucks.
      - Success? If it failed from inside the network, your network sucks.
    Check the server load: CPU/RAM usage. Is it overloaded?
      - Yes. Who's the culprit? Kill some processes/reboot.
      - No? Go on.

    What other steps should I add?
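    A rough shell sketch of the first few checks, assuming a Linux admin box with ssh access to the server; hostnames, IPs and commands like traceroute being installed are placeholders/assumptions.

        #!/usr/bin/env bash
        # Quick triage for a slow/unresponsive server.
        HOST=www.example.com        # placeholder
        IP=203.0.113.10             # placeholder

        echo "== ping by IP (rules out DNS) =="
        ping -c 4 "$IP" || echo "no reply by IP: server or connection problem"

        echo "== ping/traceroute by name (DNS + routing) =="
        ping -c 4 "$HOST" || echo "name fails but IP works: look at DNS"
        traceroute "$HOST"

        echo "== load, memory and top CPU hogs on the server =="
        ssh "$IP" 'uptime; free -m; ps aux --sort=-%cpu | head -n 10'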

  • smbclient timing out

    - by Sam Lee
    I am trying to set up a Samba share on a CentOS machine. I want to connect to this server using smbclient on OS X. Here is what happens:

        > smbclient -L X.X.X.X
        timeout connecting to X.X.X.X:445
        timeout connecting to X.X.X.X:139
        Error connecting to X.X.X.X (Operation already in progress)
        Connection to X.X.X.X failed

    What could be going wrong? Here is my iptables dump on the CentOS machine (the server):

        > iptables -L -n
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
        REJECT     all  --  0.0.0.0/0            127.0.0.0/8          reject-with icmp-port-unreachable
        ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:445
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:3000
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
        ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmp type 8
        REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:3000

    And finally, my smb.conf:

        [global]
        workgroup = workgroup
        security = SHARE
        load printers = No
        default service = global
        path = /home
        available = No
        encrypt passwords = yes

        [share]
        writeable = yes
        admin users = myusername
        path = /home/myhome/
        force user = root
        valid users = myusername
        public = yes
        available = yes
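    A few quick checks that usually narrow this down, assuming the first three are run on the CentOS server and the last two from the OS X client; the service name and ports are the standard Samba ones.

        # is smbd running and actually listening on 445/139?
        service smb status
        netstat -tlnp | grep -E ':(139|445)'

        # does the config parse, and what will smbd really serve?
        testparm -s

        # from the OS X client: is TCP 445/139 reachable at all?
        nc -vz X.X.X.X 445
        nc -vz X.X.X.X 139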

  • Black Screen on Logon (windows 7 home premium)

    - by Blacknight334
    I have been having some trouble with a Dell XPS 15 laptop that I recently purchased. It is under a month old, and a problem has occurred: upon logging on just after start-up, the computer will just sit on a black logon screen (with the mouse still visible and active) for a few minutes. It is extremely annoying, especially when I'm in a rush. So far, I have tried updating the drivers and installing all Windows updates, and still nothing. Also, it doesn't seem to do it when I log into safe mode, or if it does, it does it for less than 10 seconds and then loads the desktop (in a normal boot it usually takes a few minutes). I have also run a number of the built-in diagnostics, but found no errors. I want to avoid having to do a system restore for as long as I can. Does anyone know anything that can help? (The laptop is running a 500GB SSD, 2GB Nvidia 640m, 8GB RAM, 3rd gen i7 quad core with 8 threads.) Thanks.

  • Alternative software for Pinnacle PCTV 100e

    - by Stijn Sanders
    I have a Pinnacle PCTV 100e external USB cable television receiver. I've been using Pinnacle's software that came with the card (TVCenter Pro) to record things at given times. Things I don't like: an extremely high CPU load, and that it doesn't seem to stop the screensaver from kicking in when watching in full screen. Also, I was away the last two weeks, and the schedules went terribly bust. Some items were recorded hours before or after the actual scheduled time (and now I've missed some shows), and some recurring schedules weren't converted into the next occurrence correctly! Is there good alternative software that would work with my PCTV 100e? (Preferably cheap or free.) I've tried VLC Player, which gets video, but no audio. I've tried MediaPortal, which crashes when trying to scan for channels; when I select a channel manually, the stored mpg has big encoding errors and is also missing audio. There's VirtualDub, but that doesn't have ready-made scheduled-recording options. I can conjure up some scheduled scripts for that, but I've noticed the sync gets awfully wrong after some time. I've tried Windows Media Center, but it doesn't seem to support the PCTV 100e.

  • Configuring https access on HP A5120 Switch

    - by GerryEgan
    I am trying to configure HTTPS management on an HP A5120 switch running version 5.20.99, Release 2215, and not having much luck. I have followed the manual by creating an SSL policy first and then enabling the HTTPS server with the SSL policy:

        ssl server-policy sslpol
        ip https ssl-server-policy sslpol
        ip https enable

    When I try to log onto the switch with Google Chrome I get the following error: Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error. When I look this up I find references to errors caused by TLS being used where SSL is expected. I can find no way to specify the SSL version in the server policy. The manual has a configuration example that uses MSCEP to retrieve a certificate, but in Windows 2008 R2 that feature is only available in the Enterprise and Datacenter editions, which I don't have. I have SSH configured and it is using a locally generated certificate, so I'm not sure if I can use that, but I'd like to if possible. Has anybody been able to set up HTTPS management on HP A-series switches without MSCEP? Any and all help appreciated! Here is a copy of my config with the interfaces removed:

        version 5.20.99, Release 2215
        #
        sysname MYSYSNAME
        #
        irf domain 10
        irf mac-address persistent timer
        irf auto-update enable
        undo irf link-delay
        #
        domain default enable system
        #
        telnet server enable
        #
        vlan 1
        #
        vlan 100
         description Management
        #
        radius scheme system
         primary authentication 127.0.0.1 1645
         primary accounting 127.0.0.1 1646
         user-name-format without-domain
        #
        domain system
         access-limit disable
         state active
         idle-cut disable
         self-service-url disable
        #
        user-group system
         group-attribute allow-guest
        #
        local-user admin
         password cipher
         authorization-attribute level 3
         service-type ssh telnet terminal
         service-type web
        #
        stp enable
        #
        ssl server-policy sslpol
         pki-domain MYDOMAIN
        #
        interface NULL0
        #
        interface Vlan-interface199
         ip address 192.168.199.140 255.255.255.0
        #
        interface GigabitEthernet1/0/1
         poe enable
         stp edged-port enable
        #
        interface Ten-GigabitEthernet2/1/2
        #
        dhcp-snooping
        #
        ntp-service unicast-server 192.168.1.71
        #
        ssh server enable
        #
        ip https ssl-server-policy sslpol
        ip https enable
        #
        load xml-configuration
        #
        user-interface aux 0 1
        user-interface vty 0 15
         authentication-mode scheme
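    To check whether this is a protocol-version mismatch rather than a certificate problem, openssl can probe what the switch will negotiate. This assumes a Linux/macOS host that can reach the management address (192.168.199.140 is taken from the config above) and an OpenSSL build that still accepts the -ssl3 option.

        # Which handshakes does the switch accept on 443?
        openssl s_client -connect 192.168.199.140:443 -ssl3 < /dev/null
        openssl s_client -connect 192.168.199.140:443 -tls1 < /dev/null

        # If only the -ssl3 handshake completes, ERR_SSL_PROTOCOL_ERROR is the
        # browser refusing SSLv3 rather than a problem with the certificate itself.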

  • Windows Server 2008 (Web Server) Replication

    - by justjoshingyou
    We have a load balanced environment with Windows Server 2008. What are some best practices to setting up replication across the web servers? Do I only want to replicate the web folders? How about replicating IIS changes - or do I need to make IIS changes on every server? I've never, ever set up replication, but I have worked with a web farm that used it before. Basically, I only know the basics about how it works, and am looking for any advice, guides, warnings, etc on setting this up. If you'd like to offer any advice, I'll let you know how our environment is for now. We have 1 prod server up and the second is nearly ready to go. We are using a cloud system and all machines are VM's. I am in the process of setting up the domain controller now (as I need to have one for DFS). Any ideas on the best way to go about setting up replication? Should we just stick the prod server in from the start or set up using a test VM and our second server and then switch it up later? I do not want to risk overwriting our prod server. Thanks!

  • Windows XP Setup Fails to Recognize USB Floppy after formatting AHCI disk

    - by Strahn
    I am attempting to install Windows XP Professional x64 onto a HP EliteBook 8540w. I have downloaded both the latest Intel Rapid Storage Technology drivers and the Intel Storage Matrix drivers that are listed on HPs website and copied the drivers over to a floppy disk (two separate floppies, one for each version of the drivers.) Booting to my WinXP Pro x64 install CD, I go through the F6 process, load the driver and am able to see my HDD, delete, create and format partitions on it. When I go to continue the install, after checking the disk, the system asks me to enter the disk labeled "Intel Rapid Storage Technology" and press enter to continue. Nothing happens at this point when I press enter. This happens if I use the latest drivers or the older drivers. We have created a slipstreamed install CD using nLite that has the AHCI drivers integrated, which installs fine. However, we have identified a number of issues with the system that I believe are side-effects of using nLite for the slipstreaming and I am attempting to verify that. I have researched this issue and found a few examples of others having the same problem, but no solution. The USB floppy is a Lacie branded floppy, connecting it to a working XP workstation shows it to be the Y-E Data USB floppy drive that is supposedly 100% compatible with XP per MS KB 916196.

  • DriveImage XML fails with a Windows Volume Shadow Service Error

    - by ssvarc
    I'm trying to image a SATA laptop hard drive, using DriveImageXML, that is attached to my computer via a USB adapter. I'm running Win7 Ultimate 64 bit. DriveXML is returning: Could not initialize Windows Volume Shadow Service (VSS). ERROR C:\Program Files (x86)\Runtime Software\Drivelmage XML\vss64.exe failed to start. ERROR TIMEOUT Make sure VSSVC.EXE is running in your task manager. Click Help for more information. VSSVC.EXE is running in Task Manager, as is VSS64.exe. Looking at the FAQ on the Runtime webpage this turned up: Please verify in Settings-Control Panel-Administrative Tools-Services that the following services are enabled: MS Software Shadow Copy Provider Volume Shadow Copy Also make sure you are able to stop and start these services. Possible reasons for VSS failures: For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image. You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help. Both of those services are running and can be started and stopped without issue. Using "regsvr32" to register ""oleaut32.dll" returns successful, but "oleaut.dll" returns: The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found. Some other information that might be relevant. Browsing to the drive is successful, but accessing certain folders returns an "access" error. Windows runs a permissions adder that adds the current user profile to the NFTS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?

  • Get Illegal Instruction error when booting Linux in VirtualBox, works fine when booted directly

    - by rkjnsn
    I have a computer on which I am dual booting Windows 7 and Gentoo Linux (both 64-bit). I want to be able to load up my Linux installation in a VM while I am booted into Windows. I have installed VirtualBox and followed the instructions for creating a raw disk VMDK. When I start the VM, Linux starts booting, but then fails with the following error when unlocking my root partition: truecrypt[441] trap invalid opcode ip:373615538e0 sp:3dd0e0dfb60 error:0 in libpixman-1.so.0[373614d6000+8d000] Everything works fine when I boot into Linux directly. What could cause an illegal instruction to be hit in libpixman only when booting in VirtualBox? Update: As a troubleshooting step, I recompiled pixman without "-march", and no longer get an illegal instruction error in that library. (The boot fails in the same spot with the same error in a different library, however.) How can I determine the specific opcode that isn't working in VirtualBox so I can disable it in my CFLAGS without having to disable all CPU-specific optimizations? I am still confused as to why there would be any user-mode instruction that would fail to work in a VM. Is this a known limitation? My CPU is an Intel Core i7 3720QM, and I have hardware virtualization support enabled.
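    A sketch of how to pin down the faulting opcode from the trap message, using the ip and the library load address it reports; the library path is an assumption for a 64-bit Gentoo install. If the disassembly shows something like an AVX/SSE4 instruction, that would fit a -march=native build running in a VM that does not expose the host's full instruction set.

        # offset of the trap inside libpixman
        IP=0x373615538e0
        BASE=0x373614d6000
        OFF=$(( IP - BASE ))
        printf 'offset into libpixman: %#x\n' "$OFF"

        # disassemble a small window around that offset (path may differ)
        objdump -d --start-address=$(( OFF - 32 )) --stop-address=$(( OFF + 32 )) \
            /usr/lib64/libpixman-1.so.0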

  • Optimize apache for 10K+ wordpress views a day on 2GB RAM E6500 CPU

    - by Broke artist
    I have a dedicated server with Apache/PHP on Ubuntu serving my WordPress blog with about 10K+ pageviews a day. I have the W3TC plugin installed with APC. But every now and then the server stops responding or goes dead slow and I have to restart Apache to get it back. Here's my config; what am I doing wrong?

        ServerRoot "/etc/apache2"
        LockFile /var/lock/apache2/accept.lock
        PidFile ${APACHE_PID_FILE}
        TimeOut 40
        KeepAlive on
        MaxKeepAliveRequests 200
        KeepAliveTimeout 2

        StartServers 5
        MinSpareServers 5
        MaxSpareServers 8
        ServerLimit 80
        MaxClients 80
        MaxRequestsPerChild 1000

        StartServers 3
        MinSpareServers 3
        MaxSpareServers 3
        ServerLimit 80
        MaxClients 80
        MaxRequestsPerChild 1000

        StartServers 3
        MinSpareServers 3
        MaxSpareServers 3
        ServerLimit 80
        MaxClients 80
        MaxRequestsPerChild 1000

        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}
        AccessFileName .htaccess
        Order allow,deny
        Deny from all
        Satisfy all
        DefaultType text/plain
        HostnameLookups Off
        ErrorLog /var/log/apache2/error.log
        LogLevel error
        Include /etc/apache2/mods-enabled/*.load
        Include /etc/apache2/mods-enabled/*.conf
        Include /etc/apache2/httpd.conf
        Include /etc/apache2/ports.conf
        LogFormat "%v:%p %h %l %u %t \"%r\" %s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent
        CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined
        Include /etc/apache2/conf.d/
        Include /etc/apache2/sites-enabled/
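    A rough sizing check worth running before tuning anything, assuming the stock Ubuntu prefork setup; the figures in the comments are illustrative, not measured from this server.

        # average resident size of the Apache children, in MB
        ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) printf "avg %.1f MB over %d procs\n", sum/n/1024, n}'

        # memory actually left over for Apache once MySQL/APC take their share
        free -m

        # e.g. with ~1200 MB to spare and ~30 MB per child:
        #   MaxClients ~= 1200 / 30 = 40
        # MaxClients 80 on a 2 GB box can push it into swap, which looks exactly
        # like "stops responding until Apache is restarted".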

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend forwards to Apache 2 as the default backend (or nginx; it's Apache in this local test), and to the Node.js app if the URL has socket.io in the request AND a given domain name. I have something like:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend all 0.0.0.0:80
            timeout client 5000
            default_backend www_backend
            acl is_soio url_dom(host) -i socket.io  #if the request contains socket.io
            acl is_chat hdr_dom(host) -i chaturl    #if the request comes from chaturl.com
            use_backend chat_backend if is_chat is_soio

        backend www_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout server 5000
            timeout connect 4000
            server server1 localhost:6060 weight 1 maxconn 1024 check   #forwards to apache2

        backend chat_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout queue 50000
            timeout server 50000
            timeout connect 50000
            server server1 localhost:5558 weight 1 maxconn 1024 check   #forward to node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly, but it fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js). Why does it go to Apache when it should go to the Node.js app that serves those files? The weird thing is that if I access the socket.io file directly, after refreshing a few times, it loads, so I suppose HAProxy is "caching" the forwarding for the client when it makes the first request and reaches the Apache server. Any suggestion of how this can be solved, or what I can try or look at?
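    One hedged guess at the routing part: url_dom(host) inspects the host portion of the request, so a path like /socket.io/socket.io.js never matches the is_soio ACL and falls through to the Apache backend. Below is a sketch of the frontend with a path-based ACL instead, printed as a heredoc; HAProxy 1.4-era syntax is assumed to match the config above.

        cat <<'EOF'
        frontend all 0.0.0.0:80
            timeout client 50000                  # long-polling friendly
            default_backend www_backend
            acl is_chat hdr_dom(host) -i chaturl  # request is for chaturl.com
            acl is_soio path_beg -i /socket.io    # request path starts with /socket.io
            use_backend chat_backend if is_chat is_soio
        EOF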

  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn, php-fpm) on CentOS 5.8 x64. The problem I have is that PHP crashes the moment I run any social oAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs. In the php-fpm log:

        WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start

    In the nginx log:

        ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream

    From what I can see, it goes wrong when PHP tries to make a request to any of the oAuth servers. For example, https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source is one of the scripts that works perfectly on my other machines but causes PHP to crash. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it.

    +++ UPDATE +++

    Now I have been doing some debugging in one of the scripts that is playing up. If you go to line 808 (http://pastebin.com/gSnzRtXb) it runs the curl_exec() command. When that is run, it crashes. If I put echo 'test'; exit; just above that line, it echoes correctly; if I do it below that line, PHP crashes. Which means it's line 808 that causes the crash. So I made a very simple script to do some testing (http://pastebin.com/Rshnyhcm), which also uses curl_exec, but that runs just fine. So I started to dig deeper into the query from the Facebook script to see what values the $opts array contains from line 806. Output of that array is: http://pastebin.com/Cq9ffd3R. What the problem is, I still have no clue :(
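    One way to see exactly where the SIGSEGV happens is to catch it in gdb; this assumes gdb is installed and works best with pm.max_children temporarily set to 1 so the request hits the traced worker. A backtrace ending in libcurl/SSL code often points at PHP and libcurl being linked against different SSL libraries, a known cause of curl_exec() crashes with source-built PHP.

        # attach to a php-fpm worker and wait for the crash
        PID=$(pgrep -f 'php-fpm: pool www' | head -n1)
        gdb -p "$PID" \
            -ex 'handle SIGSEGV stop nopass' \
            -ex 'continue' \
            -ex 'bt full' \
            -ex 'detach' -ex 'quit'
        # then reproduce the oAuth request from a browser in another session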

  • Free tiered storage automation in linux?

    - by NginUS
    I have a couple virtualized fileservers running in QEMU/KVM on ProxmoxVE. The physical host has 4 storage tiers with significant performance variances. They're attached both locally and via NFS. These will be provided to the fileserver(s) as local disks, abstracted into pools, and handling multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers. There's a similar post on the site here: Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage) in which the accepted answer was a suggestion to abandon a linux solution for NexentaStor. I like the idea of running NexentaStor. It almost fits the bill. NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle. I don't know if zfs pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment. Then I saw a commercial solution called SmartMove: http://www.enigmadata.com/smartmove.html And it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.

  • MGE UPS Cut power - What happened?

    - by JT.WK
    I have 3 x MGE Pulsar M 3000 2700W UPS units within my server room which have run perfectly up until now. On Saturday morning I noticed that one of these UPS units was no longer outputting power; the LCD displayed a message saying "load not powered" and told me to press the power button to start output. Needless to say, the servers, switches and routers it was supporting were all turned off. I tried pressing and even holding the power button, but the unit refused to start back up again. Only power cycling the unit got it back up. I have checked the logs on the UPS, although they were useless: nothing out of the ordinary, and no email notifications had been sent. The output level sits at about 51% and all battery checks are OK. It is now three days on and the UPS is still up and running (although I am scheduling an outage to get it out of there ASAP). Does anyone have any idea what could have gone wrong here? Is there anything else that I can check that could help?

  • Computer resetting semi-randomly

    - by Peter
    Hi, I'm having a problem with my desktop whereby it sometimes resets itself semi-randomly. For example, I'll switch it on, it'll boot an OS and shortly after getting to the desktop it will immediately reset with no warning. The time isn't consistent - sometimes it does it before reaching login. I'm pretty sure it's not an OS thing; have tried Ubuntu and a Windows install and both exhibit it. It also doesn't appear to be heat-related because sometimes it appears to be able to "get past" it and will then run stably even under load; if anything it seems to be worse from a cold start. My gut feeling is some kind of power issue but I'm clutching at straws a little. Any suggestions on how I could go about testing it or trying to narrow the problem down would be appreciated. The machine is four years old now so while I can replace components if needed, it's not worth enough that I'm comfortable buying new parts without being pretty confident that they'll fix the problem. Thanks in advance for any help :) Edit: Okay, the motherboard is a MSI K8N SLI; CPU is an Athlon64 X2 4200+. Has one video card, a GeForce 7800GT. 1GB RAM, not sure of brand; 3 hard drives, two SATA and one PATA. Flashed motherboard to latest BIOS some time ago. Edit the Second: I thought I'd narrowed it down to the PSU for a while, but then it recurred again. I ended up pulling everything out but CPU, RAM and motherboard and it still seems to be stuffed (if anything, it's gotten worse in the last couple of days). I assume it's one of those three components, but the machine is old enough that I don't really want to spend money replacing any of them. So thanks for everyone's suggestions; much appreciated!

  • Fully FOSS EMail solution

    - by Ravi
    I am looking at various FOSS options to build a robust email solution for a government-funded university. Commercial options are to be chosen only in the worst-case scenario. Here are the requirements:

    - Approx 1000-1500 users - Postfix or Exim? (Sendmail is out ;-))
    - Mailing lists for different groups / need a web-based archive - Mailman? Sympa?
    - Centralised identity store - OpenLDAP? Fedora 389DS?
    - Secure IMAP only, no POP3 required - Courier? Dovecot? Cyrus?
    - Anti-spam - SpamAssassin? What else?
    - Calendaring - ??
    - Webmail - good to have, not mandatory - needs to be very secure... so SquirrelMail is out ;-)?

    Other questions:

    - What mailbox storage format to use, and where to store it? Database/file system?
    - Simple and effective HA options?
    - Is there a web proxy equivalent to Squid in the mail server world? Software load balancers? CARP?
    - Monitoring and alerting?
    - Backup?

    The government wants to stimulate the local economy by buying hardware locally from whitebox vendors. Also, local consultants and university students will do the integration. We looked at out-of-the-box integrated solutions like Axigen, Zimbra and GMail, but each was ruled out in favour of a DIY approach in the hope of full control over the data and avoiding vendor lock-in - which I thought was a smart thing to do. I wish more provincial governments in the developing world would think of these sorts of initiatives. As for the OS: Debian or FreeBSD would be first preference; commercial OSes need not apply. CentOS is a second-tier option...

  • Google Chrome doesn't want to access Facebook

    - by Pieter van Niekerk
    I have been experiencing a bit of a problem with Chrome over the last couple of days where it doesn't want to access Facebook. When I open Chrome it works fine for a while, and then if I refresh the page it gives me the Chrome 'This webpage is not available' message:

        This webpage is not available.
        Google Chrome could not load the webpage because www.facebook.com took too long to respond. The website may be down, or you may be experiencing issues with your Internet connection.
        Here are some suggestions:
        - Reload this webpage later.
        - Check your Internet connection. Restart any router, modem, or other network devices you may be using.
        - Add Google Chrome as a permitted program in your firewall's or antivirus software's settings. If it is already a permitted program, try deleting it from the list of permitted programs and adding it again.
        - If you use a proxy server, check your proxy settings or contact your network administrator to make sure the proxy server is working. If you don't believe you should be using a proxy server, adjust your proxy settings: go to the wrench menu > Options > Under the Hood > Change proxy settings... > LAN Settings and deselect the "Use a proxy server for your LAN" checkbox.

    This problem only occurs when using the proxy and doesn't occur at all when not on the proxy. I have also tried different browsers (IE9 and Firefox 9.01) and it doesn't occur in any of them. The problem goes away for a while when I restart Chrome, only to happen again a couple of minutes later. I have tried deleting the cookies for Facebook without restarting Chrome, but to no avail. I am using Windows 7 with Chrome 17.

  • Not able to find scripts present in /etc/profile.d directory [on hold]

    - by priya
    I am using Red Hat Linux 6.0 on a DaVinci board. I have to change the system clock resolution, so I am changing the HZ environment variable. For this I have written a script that sets HZ=1000 and placed it in /etc/profile.d, and I have added a loop in /etc/profile so that, as usual, /etc/profile loads the scripts present in /etc/profile.d. But when I log into the system at root level it shows the error:

        -bash: ./etc/profile.d/resolution.sh (my script name): No such file or directory

    Also, why is it showing ./etc and not /etc here? Is something related to that? I also tried adding the script in /etc/init.d, but still no change in the value of HZ takes place. Please tell me where to change things so that this environment variable gets changed. The script (resolution.sh) contains:

        #!/bin/bash
        export HZ=1000

    The content of /etc/profile which I entered is:

        if [ -d /etc/profile.d ]; then
          for i in /etc/profile.d/*.sh; do
            if [ -r $i ]; then
              .$i
            fi
          done
          unset i
        fi

    And the output of the grep command is:

        -rw-r--r-- 1 root root  535 Feb  4  2004 profile
        -rwxr-xr-x 2 root root 4096 Feb  2  2004 profile.d
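    For what it's worth, the stray ./etc in the error message is consistent with the .$i in the loop: without a space, the dot is glued onto the path and bash looks for a file literally named ./etc/profile.d/resolution.sh instead of sourcing it. A sketch of the loop with the space added:

        if [ -d /etc/profile.d ]; then
          for i in /etc/profile.d/*.sh; do
            if [ -r "$i" ]; then
              . "$i"       # space after the dot: source the script
            fi
          done
          unset i
        fi

    Note also that exporting a shell variable called HZ only changes the environment seen by user programs; the kernel tick rate itself (CONFIG_HZ) is fixed when the kernel is built.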

  • RDP or SSH connection trough Windows 2008 server VPN hang after a while

    - by xt4fs
    I have been experiencing a very strange issue with our VPN setup on Windows Server 2008. That server is running as a Xen virtual machine. We use it for two purposes: to let our mobile workers connect to another server hosted somewhere else that only allows that IP, and to RDP or SSH to many other virtual machines on the same server. The server has no performance issues and still has plenty of memory free. All the other virtual machines have no problems whatsoever. Many of those virtual machines have public IPs (web servers) and all their firewalls are set to allow SSH or RDP connections only from their local interface. When I connect directly with either SSH or RDP to one of the other virtual machines, everything runs without any issues. However, when I do so through the VPN, after some time the connection just hangs; it usually resumes after a while (5 or 10 minutes). It seems that the more network usage there is, the more often it happens, to the point where it is completely unusable. The surest way I've found to make it hang faster is to ping the VPN client IP from the local network: after some time the latency increases until it hangs. This happens even if I RDP to the local IP of the VPN server through the VPN. The server reports no problem, and if I disconnect from the VPN and reconnect right away everything is fine. There is nothing wrong in the VPN server log. I thought at the beginning that it could have been an issue with the host server, so I tried to RDP/SSH directly to the guests and experienced no issues while doing this, so it really seems to be a problem with the VPN server on Windows Server 2008. Another very weird thing: there does not seem to be any issue if you only use the Internet (NAT) without trying to connect to any local IPs.
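    One common culprit for tunnels that work until real traffic flows and then hang is MTU/fragmentation, so a quick probe may be worth the few minutes. The sketch below uses Linux ping syntax from a VPN client (on Windows the rough equivalent is ping -f -l <size> <host>); the target IP is a placeholder for a host reached through the VPN.

        TARGET=192.168.1.10      # placeholder: a host behind the VPN
        for size in 1472 1460 1400 1350 1300; do
          if ping -M do -s "$size" -c 2 -W 2 "$TARGET" > /dev/null 2>&1; then
            echo "payload $size OK (path MTU >= $(( size + 28 )))"
          else
            echo "payload $size dropped"
          fi
        done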

  • Experiences in Upgrading from Exchange 2003 to Exchange 2010

    - by gWaldo
    I'm currently running Exchange 2003 SP2 Cluster on a Server 2003 AD Forest (in native 2003 mode), and we beginning to plan the upgrade to Server 2008 AD and Exchange 2010. We have two main sites, one middle-sized office, and a couple of smaller sites which have DCs (which may be RODCs after the upgrade). Currently all of our Exchange cluster is in my main site, but we are considering using the new datastore paradigm for load-balance/failover at the other large site, but this is not set in stone. Right now we are in the information-gathering and planning phases. I am looking for input of any gotchas experienced while performing either upgrade, but especially the Exchange upgrade. Gotchas? What surprised you? What wasn't documented? What said one thing but was misleading? (Confusing either in content or severity.) What is great or horrible about the new system? What worked well? What worked poorly? If you were to do it over again...? (I know that this isn't so much a question that can be definitively answered, but I'm happy to reward insight and useful resources (not the Microsoft documentation, but Blogposts are welcome) with upvotes.) UPDATE A couple items of note: -We are not currently using OWA (currently only the admins), but it may become more of a consideration with iOS devices. -We do have a small number of Blackberries in the environment (< 10%). -In addition to the standard Exchange connectors, we have a third-party connector for Captaris RightFax integration.

  • .ashx cannot find type error on IIS7 , no problems on webdev server

    - by Aivan Monceller
    I am trying to make AspNetComet.zip work on IIS7 (a simple comet chat implementation) Here is a portion of my web.config. <system.web> <httpHandlers> <add verb="POST" path="DefaultChannel.ashx" type="Server.Channels.DefaultChannelHandler, Server"/> </httpHandlers> </system.web> <system.webServer> <handlers> <add name="DefaultChannelHandler" verb="POST" path="DefaultChannel.ashx" type="Server.Channels.DefaultChannelHandler, Server"/> </handlers> </system.webServer> When I publish the website on my localhost IIS7 I receive an error: POST http://localhost/DefaultChannel.ashx 500 Internal Server Error Could not load type 'Server.Channels.DefaultChannelHandler The target framework of this project is .Net 2.0 I tried the Classic and Integrated Mode application pool for .Net 2.0 with no luck. I also tried converting the project to 4.0 and tried the Classic and Integrated Mode application pool for .Net 4.0 with no luck. I also tried adding the managed handler through IIS Manager's Handler Mappings. If you have time please download the source (184kb) to reproduce the problem on your own machine. The zip contains a VS2010 solution (.Net 2.0). You could also try to convert this to .Net 4.0 I am using Windows 7 anyway if that matters. If you need more details, please drop your comments below. This is working fine by the way on my webdev server.

  • (Zywall USG 300) NAT bypassed when accessing in-house-server From LAN Via domain name

    - by mschr
    My situation is like this: I host a number of websites from within our joint network solution. On the network there are basically 3 categories: the known public (registered via MAC, given a static DHCP lease); the anonymous LAN connections (given a lease from a specific DHCP range); and switches, Unix hosts and the firewall. Now, consider the following hosts which are of interest:

        111.111.111.111 (ZyWALL USG 300 WAN)
        192.168.1.1 (ZyWALL USG 300 LAN) - load balances and monitors bandwidth, plus handles NAT
        192.168.1.2 (Linux www) - serves mydomain1.tld and mydomain2.tld
        192.168.123.123 (random LAN client) - accesses mydomain1.tld from the LAN
        23.234.12.253 (random external client) - accesses mydomain1.tld via the WAN

    DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111, and the Linux www host serves the HTTP parts with VirtualHost configurations, setting up the document roots per ServerName (this is not so interesting though). A NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT). Our problem follows: when accessing http://mydomain1.tld from outside the joint network (example host 23.234.12.253) everything is fine; the ZyWALL receives requests on port 80 and maps them to the Linux host's httpd. However, when trying to go through the NAT from the LAN side (in-house, example host 192.168.123.123), the request gets filtered by the ZyWALL's port 80 firewall. I know this only because port 443 is open for the administration interface and https://mydomain1.tld prompts for the ZyWALL login. So my conclusion is that LAN clients accessing 111.111.111.111 are in fact routed to 192.168.1.1 while bypassing the NAT table. I need to know how to set up NAT / policy routes so that LAN -> WAN -> LAN will function with proper network translations, instead of doing the 'quick nameserver lookup' or whatever this might be.
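    Besides a NAT loopback / policy route on the USG itself, a common workaround is split DNS, so LAN clients resolve the sites straight to the internal web server and never hit the WAN address at all. A sketch using dnsmasq on an internal resolver; the domain names and internal IP come from the question, while the dnsmasq host itself is an assumption.

        sudo tee /etc/dnsmasq.d/internal-web.conf > /dev/null <<'EOF'
        # LAN clients get the internal address; external clients still use the
        # public A records pointing at 111.111.111.111
        address=/mydomain1.tld/192.168.1.2
        address=/mydomain2.tld/192.168.1.2
        EOF
        sudo service dnsmasq restart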
