Search Results

Search found 51555 results on 2063 pages for 'application shutdown'.


  • First Request to IIS Express Fails with 503 Service Unavailable, Second Succeeds

    - by Chris Moschini
    Each time I start my ASP.Net MVC 3 app from Visual Studio 2010, IIS Express launches and IE spins waiting. The request fails with HTTP 503 Service Unavailable. I hit Refresh in IE, and the request succeeds. All subsequent requests succeed until I stop debugging. The next time I go to start debugging, the first request fails again. Has anyone else experienced this?

    In IISExpress\applicationhost.config I have:

        <site name="ProjectName" id="6">
          <application path="/" applicationPool="Clr4IntegratedAppPool">
            <virtualDirectory path="/" physicalPath="c:\users\chris\dropbox\code\2010\SolutionName\ProjectName" />
          </application>
          <bindings>
            <binding protocol="http" bindingInformation="*:80:laptop" />
          </bindings>
        </site>

    I have this in my hosts file:

        127.0.0.1 laptop

    And my project is set to start with IIS Express, with Project Url set to http://laptop. It's very strange that only the first request fails, perhaps as though Visual Studio isn't waiting long enough for IIS Express to start? Is there some way to make it wait? Stopping debugging, making a change, and then starting again is one of the most common tasks I do, so adding another step to get there is pretty annoying.
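    One thing worth ruling out before blaming timing (an assumption on my part, since a first-hit 503 can have several causes): binding IIS Express to port 80 under a custom host name needs an HTTP.sys URL reservation, and a missing or stale reservation tends to surface exactly as a 503. From an elevated prompt, a minimal check and fix might look like:

        rem List existing reservations that mention the custom host name
        netsh http show urlacl | findstr laptop

        rem Reserve the URL from the binding above for non-elevated use
        netsh http add urlacl url=http://laptop:80/ user=Everyone

    Both commands are stock netsh syntax; the host name and port are simply taken from the binding in applicationhost.config.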

    Read the article

  • Reverse Proxies and AJAX

    - by osij2is
    A client of ours is using IBM/Tivoli WebSEAL, a reverse-proxy server, for some of their internal users. Our web application (ASP.NET 2.0) is a fairly straightforward web/database application. Currently, the client users who go through the WebSEAL proxy are having problems with a 3rd party .NET control; users who are not going through the proxy have no issues. The 3rd party control is nothing more than an AJAX dynamic tree that, on each click, requests all the nodes for each leaf.

    Our clients report that once users click on a node in the control, the control freezes in such a way that nothing populates: users see a "Loading..." message appear but no activity after that, and they have to leave the page and come back to the original page in order to view the new nodes.

    I've never worked with a reverse proxy before, so I have googled quite a bit on the subject and even found an article on SF. IBM/Tivoli has mentioned this issue before, but that is about all they say about it, and while the IBM doc is very helpful, all of our AJAX comes from the 3rd party control. I've tried troubleshooting with Firebug, but since I'm not behind the reverse proxy I'm unable to truly replicate the problem.

    My question is: does anyone have experience with reverse proxies and issues with AJAX sites? How can I go about proving what the exact issue is? We're currently negotiating remote access, so assume for the greater part that I will have access to a machine that's using the WebSEAL proxy.

    P.S. I realize this question might teeter on the StackOverflow/ServerFault jurisdictional debate, but I'm trying to investigate from the systems perspective. I have no experience with reverse proxies (and I'm unclear on the benefits) and little with forwarding proxies.
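    Once that remote access lands, one low-tech way to pin the difference down (a sketch; the host names, junction name, and endpoint are placeholders for whatever the control actually requests) is to capture the same AJAX call both directly and through WebSEAL and diff the two:

        # Straight to the application server
        curl -s -D direct-headers.txt -o direct-body.txt \
            "http://appserver/MyApp/treeNodes?id=42"

        # Through the WebSEAL junction
        curl -s -D proxied-headers.txt -o proxied-body.txt \
            "https://webseal.client.example/junction/MyApp/treeNodes?id=42"

        diff direct-headers.txt proxied-headers.txt
        diff direct-body.txt proxied-body.txt

    If WebSEAL is rewriting or injecting anything into the AJAX response (content, cookies, redirects to its login form), it shows up in the diff immediately.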

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought to as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up. Photo by camknows.

    For some people, local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files:

    Local Storage

    Pros
    - You are in control of your data
    - Your files are portable and can go with you when needed if using external or flash drives
    - Files are accessible without an internet connection
    - You can easily add more storage capacity as needed (additional drives, etc.)

    Cons
    - You need to arrange room for your storage media (if you have multiple external drives, etc.)
    - Possible hardware failure
    - No access to your files if you forget to bring your storage media with you or it is too bulky to bring along
    - Theft and/or loss of home with all contents due to circumstances like fire

    If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen…unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure…expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files:

    Cloud Storage

    Pros
    - No need to carry around flash or bulky external drives
    - All of your files are accessible wherever there is an internet connection
    - No need to deal with local storage media (or its upkeep)
    - Your files are still safe if your home is broken into or other unfortunate circumstances occur

    Cons
    - Your files and data are not 100% under your control
    - Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc)
    - No access to your files if you do not have an internet connection
    - The cloud storage provider may eventually shut down due to financial hardship or other unforeseen circumstances
    - The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider

    You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go.

    Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers!

    Read the article

  • weblogic plug-in apache http server location directive question

    - by user39510
    We are using WebLogic Portal and the Apache 2.x HTTP server with the WebLogic plug-in for Apache for load balancing. We have an application that right now can only be accessed from one of our managed servers. What I would like to do is use the Location directive to direct all requests for that page to the one managed server, and I can't get it to work.

    The context that the portal tries to forward to is something like /MyWebApp?portalusername=<username>, where <username> is a legitimate user; for example, /MyWebApp?portalusername=joesmith. All other applications and the plug-in are load balancing as expected, because every now and then you'll get sent to the second managed server for this particular application, where it is not deployed.

    I tried various things in the Apache httpd.conf like the following, but can't seem to get it to work. Any suggestions? The following is a snippet of the httpd.conf; it's a standard out-of-the-box httpd.conf file with the WebLogic plug-in configuration.

        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>

        <Location /?>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011, myserver2:7012
        </Location>
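    For what it's worth, <Location /?> isn't valid as a catch-all; Location takes a URL prefix, so the usual way to say "everything else" is <Location />. Since later <Location> sections override earlier ones in Apache's merge order, a sketch of the intended setup (assuming the host/port names above) might be:

        # Everything else: balance across the cluster
        <Location />
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011,myserver2:7012
        </Location>

        # This app only lives on the first managed server
        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>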

    Read the article

  • WebSeal and jsp content updated by Ajax

    - by lior chaga
    Hey, I have a problem running an application in an environment with WebSEAL. It is a web application with a Java server, and it contains many parts that are replaced within the page according to user input. For instance, a form called Outer.jsp may contain a form:options combo box (by spring-forms); upon selection of an option, a certain div is updated with content produced by a JSP and fetched by an Ajax call (the Ajax implementation on the client is done with the Prototype JavaScript framework 1.5.1.2). Let's call the content fetched by Ajax Inner.jsp.

    So Outer.jsp is fetching Inner.jsp, which in turn uses JS functions in files included by Outer.jsp. This, I think, is where my problem starts: Inner.jsp is not familiar with any of the functions included in Outer.jsp, and so almost any operation performed by Inner.jsp fails miserably. Needless to say, this works perfectly when running in an environment without WebSEAL.

    Note that scripting is enabled on the WebSEAL junction (with the -J option). I also see that the content returned by the Ajax call includes a document.cookie added by WebSEAL (not sure that matters to this problem). Can anyone assist? Thanks! Lior
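    One junction-related pattern that might explain it (an assumption, since I can't see the markup): server-absolute references inside the pages resolve against the WebSEAL host instead of the junction, so script includes that work when the app is hit directly come back 404 behind the proxy. Roughly:

        <!-- Breaks behind a junction: the browser asks for
             https://webseal-host/js/tree.js, outside the junction -->
        <script type="text/javascript" src="/js/tree.js"></script>

        <!-- Junction-safe: resolves relative to the current page -->
        <script type="text/javascript" src="js/tree.js"></script>

    Comparing the URLs Firebug shows for the script and Ajax requests against the junction prefix would confirm or kill this theory quickly once remote access is available.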

    Read the article

  • mysql_tzinfo_to_sql missing on my system

    - by Sk1ppeR
    I ran into a problem with time zones within MySQL. Long story short, my application is worldwide, and each database has its own timezone set within the application (not the server) in the form of "Europe/Berlin", "Europe/Vienna", "America/Sao_Paulo". Obviously MySQL won't accept these names per connection out of the box, and I read that it handles the data better if you use UTC offsets.

    Basically my goal is to log a field's alteration in another table using a trigger. For that I use UNIX_TIMESTAMP within the trigger, but UNIX_TIMESTAMP() follows the global timezone for the server, which obviously bothers me a lot :|

    So I went to search for a per-connection solution to use inside the trigger, and I found that mysql_tzinfo_to_sql can actually import zone info (UTC offsets) from my Linux zoneinfo files. Although, to my amusement, when I ran the command I got the following:

        bash: mysql_tzinfo_to_sql: command not found

    So I'm looking for a solution to fix that. I don't want to "map" the timezone names to UTC offsets just so I could use them in the trigger. Is there an alternative tool? Or at least sources for this one in particular? What kind of queries does this tool generate, so I could do it manually if there is no alternative tool? Thanks in advance for any help on the issue!

    P.S: The OS is Debian GNU/Linux 6.0 and the MySQL server is the one from aptitude, with performance tweaks in my.cnf
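    For what it's worth, mysql_tzinfo_to_sql ships with the MySQL server package rather than the client, so "command not found" usually just means that package (or its core variant) isn't installed on the machine you're typing on. The canonical invocation reads the system zoneinfo directory and emits the SQL that populates the mysql.time_zone* tables:

        # Generate and load all named zones in one go
        mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql

    After the tables are loaded, named zones work per connection (SET time_zone = 'Europe/Berlin';), and CONVERT_TZ() with named zones works inside triggers, which may remove the need to map anything to offsets by hand.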

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    I have a heavy application running on a CentOS server and I'm seeing strange memory behavior. Here is a snapshot of a munin graph:

    As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is memory that has been freed up but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or cause CentOS to reclaim the inactive memory? Thanks.

    Some extra info:

    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).

    2) cat /proc/meminfo (at a later stage than the image) gives:

        MemTotal:     14371428 kB
        MemFree:       1207108 kB
        Buffers:         35440 kB
        Cached:        4276628 kB
        SwapCached:     785316 kB
        Active:        9038924 kB
        Inactive:      3902876 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     14371428 kB
        LowFree:       1207108 kB
        SwapTotal:    10223608 kB
        SwapFree:      6438320 kB
        Dirty:          627792 kB
        Writeback:           0 kB
        AnonPages:     7844560 kB
        Mapped:          49304 kB
        Slab:           146676 kB
        PageTables:      27480 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:  17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
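    A couple of checks that may help separate the suspects (hedged, since the root cause isn't established from the graph alone): pages backing a tmpfs live in the page cache and are counted on the inactive list, so the growing /tmp is a plausible contributor, and the clean part of the page cache can be dropped on demand to see what is actually reclaimable:

        # How much the tmpfs is holding right now
        df -h /tmp
        du -sh /tmp

        # As root: flush dirty pages, then drop clean caches
        sync && echo 3 > /proc/sys/vm/drop_caches

    Note that drop_caches only releases clean, reclaimable pages; tmpfs contents are not dropped this way and only go away when the files are deleted (or get pushed to swap), so if Inactive barely moves while /tmp keeps growing, the tmpfs is the place to look.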

    Read the article

  • Windows 2008 RemoteAPP client disconnects within a matter of minutes

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and RemoteApp specifically. The situation is as follows:

    - TS idle timeout is disabled via GPO
    - TS terminates disconnected sessions after 1 hour (via GPO)

    My users can log on to the terminal server and get a full desktop, or use RDP files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they remain logged on indefinitely, and when they disconnect, the session is terminated after an hour. However, when a user connects using a RemoteApp link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects.

    Event IDs on the TS server:

    - 4779: This event is generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching.
    - 4778: This event is generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching.

    Users are connecting directly to 3389, not through a TS gateway at the moment. This behavior is consistent across the different clients that we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters, aside from what application to launch and where to find it.

    Can someone explain to me how there can be a difference in behaviour between full desktop and RemoteApp, since essentially they use the exact same client?

    Regards, Jeroen
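    To put numbers on it before changing any policy, it can help to watch the session states on the TS box while a RemoteApp user sits idle (the server name is a placeholder):

        query session /server:ts01
        rem qwinsta /server:ts01 is the same command under its older name

    Correlating the moment a session flips from Active to Disc with the 4779 events should at least show whether the disconnect follows a fixed timer, which would point at a session-time-limit setting that applies to RemoteApp connections but not to full desktops.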

    Read the article

  • Virtualbox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm doing HTTP load-testing benchmarks (using Apache Benchmark and Siege) on a small Java EE 1.7.0 / Tomcat 7.0.26 application running on Debian Squeeze 6.0.4 x64, virtualized with VirtualBox 4.1.8. The host is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml:

        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="200000"
                   redirectPort="8443"
                   acceptCount="2000"
                   maxThreads="150"
                   minSpareThreads="50" />

    The application executed on the server takes around 300 ms. The app runs well until a certain number of concurrent connections, like these:

        ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/
        ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/
        siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/

    At that point a lot of socket connection timeouts happen, and this completely overloads the host processor (while the CPU load inside the VM is normal). Doing an htop on the host, I can see that the VirtualBox process is running at over 300% CPU and never comes down, even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one processor, CPU load stays under 100%.) Restarting Tomcat doesn't do anything; I'm forced to restart the whole VM.

    I've tried launching those ab/siege commands locally on the VM and everything goes well. I first thought it was related to a Linux network limit, as explained here: Running some benchmarks using ab, and tomcat starts to really slow down. So I've modified these TCP parameters:

        echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout
        echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
        echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

    It seems to be better, but it continues to overload the host CPU and produce socket connection timeouts at a certain number of concurrent connections. I'm wondering if this is related to how VirtualBox handles external concurrent connections.
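    Given that the same load run inside the guest is fine, the emulated NIC is a reasonable suspect: with the default e1000-style adapter, every packet costs the host an exit, and high connection-setup rates multiply that. One cheap experiment (the VM name is a placeholder) is switching the adapter to the paravirtualized model, which Debian Squeeze supports out of the box:

        # On the Ubuntu host, with the VM powered off
        VBoxManage modifyvm "debian-squeeze" --nictype1 virtio

    If host CPU during the ab run drops noticeably with virtio, the overload was interrupt/emulation cost rather than anything in Tomcat's or Linux's TCP tuning.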

    Read the article

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a:

        javax.naming.NameNotFoundException: QueueConnectionFactory not bound

    error. I can look at JNDIView from the jmx-console and see that, indeed, the QueueConnectionFactory class only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's?

    The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysql-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Finally, I created a mysql-ds.xml file in the deploy directory pointing DefaultDS at an empty database, and a -service.xml file in the deploy directory with the queue definition, like the one below:

        <server>
          <mbean code="org.jboss.mq.server.jmx.Queue"
                 name="jboss.mq.destination:service=Queue,name=myfirstqueue">
            <depends optional-attribute-name="DestinationManager">
              jboss.mq:service=DestinationManager
            </depends>
          </mbean>
        </server>

    All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment?)

    Thanks
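    On the last question: yes, lookups differ in a cluster. A plain lookup on port 1099 only sees the JNDI tree of the node you happen to hit, while the cluster-wide (HA-JNDI) tree is served on port 1100 and finds the factory no matter which node currently hosts the singleton. A sketch of a client-side jndi.properties for JBoss 4.2 (node names are placeholders):

        java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
        java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
        # HA-JNDI: list the cluster nodes on port 1100 instead of 1099
        java.naming.provider.url=node1:1100,node2:1100

    Pointing the failing application's lookup at HA-JNDI this way is the usual fix for a NameNotFoundException on every node except the singleton's.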

    Read the article

  • Installer not being updated ( probably because of Windows 7 file cache )

    - by Sithu Kyaw
    I'm creating an installer for my Visual FoxPro application using ISTool and Inno Setup, and it worked fine the first time. But then I updated my code and re-built the EXE file, compiled the installer again, and found that my update was not compiled into the installer: I did not see the update in my running application. I verified that the EXE file built by VFP was updated properly; it seems the installation script just did not output the updated file.

    When I changed the folder names, it did work, but I don't want to change folder names whenever I run that installation script; that is not a good solution. I think it may be because of the Windows 7 cache system. Mine is Windows 7 Home Premium Service Pack 1.

    For example, my previous output file is located at C:\path\to\myinstaller.exe. When I compile the installation script, the output file there should be overwritten, but it was not, and even deleting the file did not help. When I changed the output file path to C:\newpath\to\myinstaller.exe, I got the fix, but that is not the solution I'm looking for. Does anyone know how to fix this?

    [Edit] I found that the installed directory was not updated properly either. For example, I installed the program to C:\Program Files\MyInstalledApp. When I run the installer again, that installation directory should be overwritten, but it isn't, so I have to uninstall the app before I re-install it. Is there any fix for this?
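    The [Edit] part in particular sounds less like a file-system cache and more like Inno Setup's version check (a guess, since the [Files] section isn't shown): when an EXE's version resource doesn't increase between builds, which is easy with VFP executables, Inno can decide the installed copy is current and skip it. Forcing the copy regardless of version info is a one-flag change:

        [Files]
        ; ignoreversion: always overwrite, even if the version resource didn't change
        Source: "C:\path\to\MyApp.exe"; DestDir: "{app}"; Flags: ignoreversion

    If the freshly compiled myinstaller.exe itself still looks stale after a compile, checking its modification timestamp right after ISTool finishes would confirm whether the compiler actually wrote it, before suspecting Windows caching.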

    Read the article

  • hMail server - sending copy of an e-mail changing the sender

    - by Beggycev
    Dear all, please help me with the following request. I am using hMailServer in a company (test.com) and have several hundred guest e-mail accounts ([email protected]). I need to accomplish this: when any of the guest e-mails receives a message (from either an internal or external sender), this e-mail (or its copy) is sent to another address, "[email protected]", which is the same for all of these guest e-mails. But I need the sender to be identified as the [email protected], not as the original sender, which is what happens when I use forwarding.

    I tried to prepare a simple VBS script using the OnAcceptMessage event to accomplish this. It works on my testing machine (which has no internet connectivity) but not in the production environment. To be specific: if I send an e-mail to [email protected] in my test environment, it is delivered to [email protected] with [email protected] as the sender; but in the production environment the e-mail stays in the guest mailbox with the original sender. I am interested in any solution, whether a rule in hMailServer or a script; anything is welcome. Thank you for any help!

    The script:

        Sub OnAcceptMessage(oClient, oMessage)
            ' Create an application object to perform operations as the hMailServer administrator
            Dim obApp
            Set obApp = CreateObject("hMailServer.Application")

            Dim adminLogin
            Dim adminPassword

            ' Enter actual values for the administrator account and password
            ' CHANGE HERE:
            adminLogin = "Admin_login"
            adminPassword = "password"
            Call obApp.Authenticate(adminLogin, adminPassword)

            ' Take the first 5 characters of the recipient's address
            Dim addrStart
            addrStart = Mid(oMessage.To, 1, 5)

            ' If the recipient's address starts with "guest"
            If addrStart = "guest" Then
                Dim recipient
                Dim recipientAddress

                ' Enter the name of the recipient and the respective e-mail address
                ' CHANGE HERE:
                recipient = "FINAL"
                recipientAddress = "[email protected]"

                ' Change the sender name and address to the guest
                oMessage.FromAddress = oMessage.To
                oMessage.From = oMessage.To & "<" & oMessage.To & ">"

                ' Delete the recipients and enter a new one - the address from the variables set above
                oMessage.ClearRecipients()
                oMessage.AddRecipient recipient, recipientAddress

                ' Save the e-mail
                oMessage.Save
            End If
        End Sub

    Read the article

  • 2 Server FC SAN Configuration

    - by BSte
    I have 2 identical servers:

    - 48 GB RAM
    - 8 GigE NICs
    - 2 FC NICs
    - 2x 72 GB RAID1 hard drives
    - Server 2008 R2 host

    I also have a Fibre Channel SAN:

    - 16x 146 GB RAID10 hard drives
    - 2x dual-port FC controllers (controllers A and B both have ports 1 and 2)
    - Server 1 has fiber to ports A1 and B1
    - Server 2 has fiber to ports A2 and B2
    - I kept the default config with 1 virtual disk and 1 volume
    - The default mappings show ports A1, A2, B1, B2 on LUN 0 with read-write

    My goal is:

    - 2x VMs with IIS and guest-level failover
    - 2x VMs with SQL 2008 Enterprise using a single DB and guest-level failover
    - 1x VM that is an application server, preferably with host failover. From what I read, this will also need AD for clustering to work.
    - I need at least 1 VM always running for IIS and the SQL DB. This includes hardware failover and application failover (e.g., rebooting a VM for critical updates).

    I was told I could install the VMs and run them from the SAN, and this is what I've tried:

    1. Installed MPIO and Hyper-V on Server 1 and Server 2
    2. Added the SAN as disk E: on both servers, made it GPT, and formatted it NTFS
    3. Configured Hyper-V on both servers to use E:\VD and E:\VHD

    On Server 1, I was able to install 3 VMs on the SAN and all worked well. On Server 2, I would start installing the other 2 VMs, but at some point the VMs would always get a corrupt-.VHD message (on either server). Everything I found about the message typically related to antivirus, so I removed all antivirus from both hosts (now only running 2008 R2). I reformatted drive E: (the SAN), recreated the VHD and VD directories, installed 3 VMs on Server 1, and then had the same issue when installing VMs on Server 2.

    Obviously something is wrong, but I'm not certain what exactly. My questions:

    1) Are my goals possible with this hardware setup? I've read that 2008 R2 supports FC SANs, but a lot of articles seem to only give examples with iSCSI setups.

    2) What would be the suggested route for setting up the SAN (disks, volumes, LUNs)? I've worked with Hyper-V on a single machine before and never had issues; actual experience working with SANs and clustering is new to me. Any suggestions or recommendations to get me in the right direction would be much appreciated.
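    On the corruption specifically: two standalone Windows hosts mounting the same NTFS-formatted LUN read-write will damage it sooner or later, because NTFS is not cluster-aware; each host caches and writes metadata as if it owned the disk exclusively, which fits the symptoms here (my assumption, but a strong candidate). The supported way to share one LUN between two 2008 R2 Hyper-V hosts is a failover cluster with Cluster Shared Volumes, roughly:

        # Elevated PowerShell on each host
        Import-Module ServerManager
        Add-WindowsFeature Failover-Clustering

        # Then, once AD is in place (cluster name and address are placeholders):
        Import-Module FailoverClusters
        Test-Cluster -Node server1, server2
        New-Cluster -Name hvcluster -Node server1, server2 -StaticAddress 10.0.0.50

    Until the cluster exists, keeping the LUN mounted on only one host at a time should stop the VHD corruption.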

    Read the article

  • Can I change the user id of a user on one Linux server to match another server in /etc/passwd?

    - by user76177
    I have a Rails application that is on a virtual machine (RHEL 6), and its database is on dedicated hardware (also RHEL 6). The app server has an NFS directory from the db server mounted and accessible; it needs to write images to that server that are uploaded via the app. Background processes on the db server need to read from and write to the same directory, as they perform resizing operations on the uploaded files.

    Right now none of this is working, because the user IDs are different between the two systems. I only need this to work for this one application, so it is way too much overhead to put an LDAP system in place. Can I simply change the user ID of this one user on one of the systems, or will that cause mass chaos?

    UPDATE: The fix worked, at least on local devices. Unfortunately, the device I have mounted to the main db server still thinks my user ID is 502 instead of 506. Do I need to remount that device, or is there an NFS daemon I can stop and restart to refresh it?
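    For a single-purpose account, changing the UID is safe as long as everything the old UID owned gets re-owned afterwards. A sketch, assuming the account is called appuser (a placeholder) and should become 506 to match the other box:

        # As root, with appuser logged out:
        usermod -u 506 appuser

        # Files keep the old numeric owner, so sweep them up:
        find / -xdev -user 502 -exec chown -h appuser {} \;

    On the UPDATE: NFS carries only numeric IDs, and ownership lives on the server's disk, so no remount or daemon restart on the client will change what you see; the chown pass has to run on the db server against the exported directory so the files there actually belong to 506.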

    Read the article

  • Internal but no external Citrix Access?

    - by leeand00
    We recently had to reload our configuration of Citrix on our server Server1, and since then we can access Citrix internally, but not externally. Normally we access Citrix at http://remote.xyz.org/Citrix/XenApp, but since the configuration was reloaded we are met with a Service Unavailable message. Accessing the Citrix web application internally, from http://localhost/Citrix/XenApp/ on Server1, works, as does access from machines on our local network via http://Server1/Citrix/XenApp/.

    I have gone into the Citrix Access Management Console, and in the tree pane on the left clicked on:

    - Citrix Access Management Console -> Citrix Resources -> Configuration Tools -> Web Interface -> http://remote.xyz.org/Citrix/PNAgent
    - Citrix Access Management Console -> Citrix Resources -> Configuration Tools -> Web Interface -> http://remote.xyz.org/Citrix/XenApp

    In both cases this displays a screen that reads "Secure client access". Here it offers me several options: Direct, Alternate, Translated, Gateway Direct, Gateway Alternate, Gateway Translated. I know that I can change the method in use by clicking Manage secure client access -> Edit secure client access settings, which opens a window that reads "Specify Access Methods" and, below that, "Specify details of the DMZ settings, including IP address, mask, and associated access method".

    I don't know what the original settings were, and I also don't know how our DMZ is configured, so I can't specify the correct settings to give access to our external users on the http://remote.xyz.org/Citrix/XenApp site. We have a vendor who set up our DMZ and does not allow us access to the gateway to see these settings. What sorts of questions should I ask them to restore remote access?

    Read the article

  • Apache/Mongrel/Redmine installation problem (VirtualHost/ProxyPass)

    - by Riddler
    I am installing Redmine as per this step-by-step instruction: http://justnotes.co.cc/2010/02/11/how-to-install-redmine-on-ubuntu/

    I am using Ubuntu 10.04.1, Apache 2.2.14, Mongrel 1.1.5. In the VirtualHost configuration stage, I am using this:

        <VirtualHost *:80>
            ServerName myserver.lv
            ProxyPass /redmine/ http://localhost:8000/
            ProxyPassReverse /redmine/ http://localhost:8000
            ProxyPreserveHost on
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
        </VirtualHost>

    But when I direct my browser to http://<my-server's-ip>/redmine/, what I see is not the Redmine web application but "Index of /redmine", with, well, an index of the files from the root directory of Redmine. Any idea how to fix that?

    P.S. I tried removing the VirtualHost stuff altogether and instead adding the following simple clauses to apache2.conf:

        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>
        ProxyPass /redmine/ http://localhost:8000/
        ProxyPassReverse /redmine/ http://localhost:8000/
        ProxyPreserveHost on

    As a result, the behavior changes! Now http://<my-server's-ip>/redmine/ produces the source code of Redmine's start page, so it is served but apparently not rendered. At the same time, http://<my-server's-ip>:8000/ still works perfectly fine, so Mongrel is serving the Redmine application as it should; it's just that something is wrong with my VirtualHost/proxying clauses in the .conf file.
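    Two things seem tangled together here (offered as a sketch, since the whole config isn't visible): the "Index of /redmine" response means that request never reached mod_proxy at all (Apache served a local directory, suggesting the proxy modules weren't active for that VirtualHost), and even once proxying works, Rails apps of this era need to be told they live under a sub-URI or their links and redirects break. The combination that usually works for Redmine under a /redmine prefix:

        # Make sure the proxy modules are actually on
        a2enmod proxy proxy_http

        # Start Mongrel with the prefix so Rails generates /redmine/... URLs
        mongrel_rails start -e production -p 8000 --prefix /redmine

        # And proxy the prefix through unchanged
        ProxyPass /redmine http://localhost:8000/redmine
        ProxyPassReverse /redmine http://localhost:8000/redmine

    For the "source code instead of a page" symptom, comparing the headers from curl -I against the working :8000 response would show whether the Content-Type is being lost on the way through the proxy.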

    Read the article

  • Easyphp Web Setup

    - by Dominique
    I've tried to setup an EasyPHP in local and make it visible from the Web via DynDNS, which I've already successed many times before, but now this just doesn't work, maybe I've forgotten something... *The "server" is a common workstation. Here is what I have done : 1) Installed EasyPhp (with a index.php/html file in WWW folder) 2) Changed the port in the config to port 80 3) Forwarded port 80 to the server IP in my router configuration 4) Added the server to the router DMZ *Also tried removing antivirus/firewall I've installed PortListener, pointed it on port 80, and when I access "myname.dyndns.com" it says Client connected GET / HTTP/1.1 Host: xyz.dyndns-remote.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; fr; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8 Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive So the server is accessible via Web, receive the connection successfully, but in my browser it says that the connection failed and show nothing...

    Read the article

  • Redhat 5.5: Multi-thread process only uses 1 CPU of the available 8

    - by Tonny
    Weird situation: Red Hat Enterprise Linux 5.5 (stock install, no updates, x64) on an HP Z800 workstation (dual Xeon 2.2 GHz; 8 cores, 16 if you count Hyper-Threading, and RHEL sees 16 cores). We have an application that can utilize 1, 2 or 4 threads for heavy calculations. Somehow all these threads run on the same core at 100% load (the other 15 cores are nearly idle), so there is absolutely no benefit from the extra threads. In fact there is a slight slowdown, as the threads get in each other's way on the single core.

    How do I get them to run on separate cores (if possible)? The application is 64-bit. I can't change anything about the software except the thread-count setting. Is there some obscure Linux setting I can try to change? (I'm a Tru64 and AIX guy. I use Linux, but have no in-depth knowledge of process scheduling on Linux.)
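    Before touching scheduler settings, it's worth checking whether the process is simply launched with a narrow CPU affinity mask; some vendor wrapper scripts pin processes themselves. The standard util-linux tool shows and widens the mask (the pid and binary name below are placeholders):

        # Show which CPUs the running process may use
        taskset -p 12345

        # Widen it to all 16 logical CPUs
        taskset -cp 0-15 12345

        # Or start the application unpinned in the first place
        taskset -c 0-15 ./calc_app

    If the mask already covers all CPUs and the threads still pile onto one core, the next thing to verify is that the "threads" really are kernel threads (ps -eLf should show several LWPs for the process), since a purely user-space threading model would produce exactly this picture.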

    Read the article

  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a MacBook Pro i5. My understanding is that by default it should be able to serve PHP 5. I have uncommented the relevant line in /etc/apache2/httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    I have restarted Apache with sudo apachectl -k restart, and when I try to access a file with a .php extension, Apache prompts me to download the file; i.e., instead of processing the PHP and sending me HTML, it thinks I want to download the file. When I look in the Apache error log I see this:

        [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations

    so it looks like PHP 5 is loading properly. I'd like to know either: how do I fix this? Or: how do I reinstall Apache 2 so that it's like I just installed the OS? Thanks in advance.

    Update: @Zayne - the end of my httpd.conf has:

        Include /private/etc/apache2/other/*.conf

    and I have a file /etc/apache2/other/php.conf with the contents:

        <IfModule php5_module>
            AddType application/x-httpd-php .php
            AddType application/x-httpd-php-source .phps
            <IfModule dir_module>
                DirectoryIndex index.html index.php
            </IfModule>
        </IfModule>

    @Zayne: I've already copied php.ini.default to php.ini in the same folder. When I run sudo apachectl configtest I get:

        /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument
        httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName
        Syntax OK

    Furthermore, I decided to try apachectl -M, which shows all loaded modules. Most importantly, the list of loaded modules includes:

        php5_module (shared)

    Since the module is being loaded, it seems the issue has more to do with making Apache use the PHP engine to process the PHP files... so something wrong with the IfModule directive?
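    With the module loaded and the AddType in place, one mundane explanation left is the browser replaying a cached response from before the module was enabled. Testing from the command line takes the browser cache out of the picture (test.php here is a stand-in for any PHP file under the DocumentRoot):

        curl -I http://localhost/test.php

    A working setup answers with Content-Type: text/html; if that's what comes back, the server side is fine and a hard refresh or cache clear in the browser should settle it. If the header still shows the file being offered as a download, the next suspect is another .conf in /etc/apache2/other/ overriding the handler.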

    Read the article

  • Running mod_php and suPHP same time

    - by BHare
    I recently came from Debian Lenny with PHP 5.2.x, where I was able to use mod_php for any PHP files that were not located in /home/ and suPHP for all the PHP files that were located in /home/. I did this because I needed a default php.ini giving all features of PHP to my websites in /var/www/, and I didn't want to have to change the owner of all the .php files from root; meanwhile I had a default php.ini without dangerous features for all the /home/ PHP files. This is the setup I had:

        <IfModule mod_suphp.c>
            <Directory /home/>
                AddType application/x-httpd-php .php .php3 .php4 .php5
                suPHP_AddHandler application/x-httpd-php
                suPHP_Engine on
                suPHP_ConfigPath /home/shared/
            </Directory>
        </IfModule>

    This was working perfectly, but recently I upgraded PHP to 5.3.5 from dotdeb (Lenny has no official PHP 5.3). This had weird issues on Lenny, such as not displaying errors correctly and other little tidbits, so I decided to upgrade from Lenny to Squeeze. I uninstalled PHP (along with it came suphp) and reinstalled from the new source. I now have 5.3.3-7 on Debian Squeeze, but I cannot get mod_php and suPHP to run at the same time anymore. mod_php always works and there are no errors in the apache2 or suphp logs. If I disable mod_php, then suPHP works. Is there something I am doing wrong?
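    One arrangement reported to work on Squeeze (a sketch, not verified against your exact package versions): Debian's suphp there registers its own handler type, application/x-httpd-suphp, precisely so it can coexist with mod_php, and mod_php has to be explicitly switched off in the directories suPHP owns. Something like:

        <IfModule mod_suphp.c>
            <Directory /home/>
                # Hand .php files to suPHP under its Debian handler name...
                AddHandler application/x-httpd-suphp .php .php3 .php4 .php5
                suPHP_AddHandler application/x-httpd-suphp
                suPHP_Engine on
                suPHP_ConfigPath /home/shared/
                # ...and keep mod_php out of this tree
                php_admin_flag engine off
            </Directory>
        </IfModule>

    The key difference from the Lenny config is the handler name: with both modules claiming application/x-httpd-php, whichever wins the handler race (mod_php, in your case) takes every request.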

    Read the article

  • Can't find standalone Chrome Gmail client that I know exists

    - by Carson
    I'm on Windows. A couple years ago when I switched from Outlook to Gmail (Google Apps), Google provided this awesome little standalone gmail client that was just a single-purpose Chrome install. It launched like a normal application, stayed updated when I updated Chrome. It was Chrome in a separate application that launched only gmail, stayed logged in really well, and "felt" like a gmail mail client, with the gmail interface. It had it's own little red envelope icon, it was a windows app. (I remember there was no Mac equivalent.) I found it while looking through the "this is how you get your company to switch to gmail" documentation that Google provided. I just repaved my box and now I'm looking for this thing again, and I had no idea it would be impossible to find. I've spent literally 2 hours looking, searching, googling, etc. I'm losing my mind. Anyone know how I can get my hands on this? I used it all day every day for 2 years, so I know it exists :), but I can not find it. Any assistance would be gratefully received.
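    If the original download stays unfindable, the same experience can be rebuilt with Chrome's application-shortcut mode, which is quite possibly what that client was under the hood (that part is a guess): a chromeless window dedicated to one site, with its own taskbar entry, launched via a shortcut whose target is something like:

        chrome.exe --app=https://mail.google.com

    Creating the shortcut from Chrome's own menu (Tools > Create application shortcuts, in versions of that era) also gives it the site's icon rather than the generic Chrome one.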

    Read the article

  • Create new folder for new sender name and move message into new folder

    - by Dave Jarvis
    Background

    I'd like to have Outlook 2010 automatically move e-mails into folders designated by the person's name. For example:

    1. Click Rules
    2. Click Manage Rules & Alerts
    3. Click New Rule
    4. Select "Move messages from someone to a folder"
    5. Click Next

    The following dialog is shown:

    Problem

    The next part usually looks as follows:

    1. Click people or public group
    2. Select the desired person
    3. Click specified
    4. Select the desired folder

    Question

    How would you automate those problematic manual tasks? Here's the logic for the new rule I'd like to create:

    1. Receive a new message.
    2. Extract the name of the sender.
    3. If a folder for that name does not exist, create it under Inbox.
    4. Move the new message into the folder assigned to that person's name.

    I think this will require a VBA macro.

    Related Links

    - http://www.experts-exchange.com/Software/Office_Productivity/Groupware/Outlook/A_420-Extending-Outlook-Rules-via-Scripting.html
    - http://msdn.microsoft.com/en-us/library/office/ee814735.aspx
    - http://msdn.microsoft.com/en-us/library/office/ee814736.aspx
    - http://stackoverflow.com/questions/11263483/how-do-i-trigger-a-macro-to-run-after-a-new-mail-is-received-in-outlook
    - http://en.kioskea.net/faq/6174-outlook-a-macro-to-create-folders
    - http://blogs.iis.net/robert_mcmurray/archive/2010/02/25/outlook-macros-part-1-moving-emails-into-personal-folders.aspx

    Update #1

    The code might resemble something like the following (placed in ThisOutlookSession; the Items.ItemAdd event is used rather than Application.NewMail, because NewMail does not hand you the arriving item):

        Public WithEvents myOlItems As Outlook.Items

        Private Sub Application_Startup()
            ' Watch the default Inbox for newly arrived items
            Set myOlItems = Application.Session.GetDefaultFolder(olFolderInbox).Items
        End Sub

        Private Sub myOlItems_ItemAdd(ByVal Item As Object)
            Dim myInbox As Outlook.MAPIFolder
            Dim myDestinationFolder As Outlook.MAPIFolder
            Dim mySenderName As String

            If TypeName(Item) <> "MailItem" Then Exit Sub

            Set myInbox = Application.Session.GetDefaultFolder(olFolderInbox)
            mySenderName = Item.SenderName

            ' Reuse the sender's folder if it exists, create it otherwise
            On Error Resume Next
            Set myDestinationFolder = myInbox.Folders(mySenderName)
            On Error GoTo 0
            If myDestinationFolder Is Nothing Then
                Set myDestinationFolder = myInbox.Folders.Add(mySenderName)
            End If

            Item.Move myDestinationFolder
        End Sub

    Update #2

    I split the code as shown above, sent a test message, and nothing happened. The instructions for actually triggering a macro to run after a new mail is received are a little light on details (for example, no mention is made of ThisOutlookSession and how to use it). Thank you.

    Read the article

  • INFORMIX - listener thread err 25582

    - by Samuel Lao
    I've been digging through different forums for the last 7 days looking for a possible solution. Our database is Informix, running on a Linux server (SUSE Linux 11). Suddenly, last Saturday, Informix began to show an error message:

        listener-thread err=-25582 oserr=0, network connection is broken

    End users started to call, reporting slow network performance against this server and moments where the database application lost its connection to the server, so we pinged the db server and got good responses (1 ms) without losing packets. I tried telnet (ipserver) 1526, which is Informix's port for the application, and it works.

    We had to disconnect the server and enable a backup db server located at another branch. It has been working, after a fashion, because the backup server doesn't have good specs (it is an old Dell server model). So I scanned the main server for viruses using Trend Micro ServerProtect; it didn't find anything (0 viruses and spyware). I checked the switches and routers, but I haven't found anything strange... What else could it be? Thanks in advance for your time and help with this issue; I would really appreciate any advice.

    Read the article

  • IIS7 Compression CSS files only compressed when dynamic compression is enabled

    - by Paul
    If anyone can help it would be appreciated. I would like to enable compression for static files within IIS7 (for the sake of simplicity I'll just refer to static css files for the time being). The problem I'm getting is that css files are only compressed when both dynamic and static compression is enabled in IIS for the website. What I really want to achieve is css compression (static file) whilst leaving the dynamic (aspx) pages as uncompressed for the time being (to avoid unnecessary CPU load). I am puzzled as to why just leaving 'static compression' enabled causes css files to be returned uncompressed. My applicationHost.config file has not be altered and looks like this: <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" /> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/javascript" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> </httpCompression> The server-wide compression setting within IIS is set to 'Dynamic Disabled' and 'Static Enabled' from the Server Features Compression page. The web-site compression setting (Server Sites MyWebsite Features Compression) is where I am enabling and disabling dynamic compression as detailed above. Any help would be really help me get unstuck on this. Thanks

    Read the article
