Search Results

Search found 15040 results on 602 pages for 'request servervariables'.


  • Sun Grid Engine: Automatically Terminating Idle Interactive Jobs

    - by dmcer
    We're considering using Sun Grid Engine on a small compute cluster. Right now, the current setup is pretty crude and just involves having people ssh to an open machine to run their jobs. We'd like to allow interactive jobs, since that should ease the transition from manually starting jobs to starting them using qsub. But there is some concern that, if we do, people might accidentally leave their interactive sessions idle and block other jobs from being run on the machines. The issue isn't just theoretical, since we previously tried using OpenPBS and there was a problem with people opening up an interactive job in a screen session and essentially camping on a machine. Is there any way to configure SGE to automatically kill idle interactive jobs? It looks like this was requested as an enhancement (Issue #2447) way back in 2007. But it doesn't seem like the request was ever implemented.
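
    A hedged stopgap, since SGE has no built-in idle detection (which is exactly what Issue #2447 asked for): put a hard runtime limit on the queue that hosts interactive jobs, so abandoned sessions are at least reaped eventually. The queue name below is an assumption:

        # Cap every job in the interactive queue at 4 hours of wallclock time.
        # This reaps long-running sessions, not strictly *idle* ones.
        qconf -mattr queue h_rt 04:00:00 interactive.q

    True idle detection would still need an external script that watches terminal activity and calls qdel on offenders.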

    Read the article

  • How should clients handle HTTP 401 with unknown authentication schemes?

    - by user113215
    What is the proper behavior for an HTTP client receiving a 401 Unauthorized response that specifies only unrecognized authentication schemes? My server supports Kerberos authentication using WWW-Authenticate: Negotiate. On the first request, the server sends a 401 Unauthorized response with a body containing an HTML document. The behavior I expect is for clients that support Kerberos to perform that authentication, and for other clients to simply display the HTML document (a login form). It seems that most of the "other clients" I've encountered do work this way, but a few do not. I haven't found anything that mandates any particular behavior in this situation. There's a brief mention in RFC 2617 (HTTP Authentication: Basic and Digest Access Authentication), but is there anything more concrete? "It is possible that a server may want to require Digest as its authentication method, even if the server does not know that the client supports it. A client is encouraged to fail gracefully if the server specifies only authentication schemes it cannot handle."
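
    For concreteness, the exchange in question looks roughly like this (an illustrative sketch, not a capture from the poster's server):

        HTTP/1.1 401 Unauthorized
        WWW-Authenticate: Negotiate
        Content-Type: text/html

        <html><!-- login form, shown by clients that cannot do Kerberos --></html>

    A Kerberos-capable client reacts to the Negotiate challenge and retries with an Authorization header; anything else should fall back to rendering the body.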

    Read the article

  • Squid site redirection

    - by AndyM
    I have an internal website that cannot be accessed from some machines on my network, due to physical location, VPN, network ranges, etc. I would like to install Squid on an "in between" network to forward requests from the clients that cannot reach the website. The issue is that the clients have no ability to connect to www.example.com, but they can reach a network with a Squid proxy, which in turn can reach www.example.com. What is the correct term I need to research in Squid? Is it just caching www.example.com, or do I need to set the clients to use a URL that gets rewritten, i.e. www.squid-example.com -> www.example.com?
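
    The term to research is most likely "reverse proxy" (Squid calls it accelerator mode): clients talk to the Squid box by a name they can reach, and Squid fetches from the origin on their behalf. A minimal sketch, assuming the hostnames from the question:

        # squid.conf -- answer as www.squid-example.com, fetch from www.example.com
        http_port 80 accel defaultsite=www.example.com
        cache_peer www.example.com parent 80 0 no-query originserver

    The alternative is plain forward proxying: leave the URLs alone and point the clients' browsers at the Squid box as their HTTP proxy.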

    Read the article

  • Cannot ping my domain-joined server - Can only ping domain controller - host unreachable

    - by Vazgen
    I have a Hyper-V server hosting a domain controller VM (192.168.1.50) and another VM (192.168.1.51) joined to this domain. I have: the domain controller as DNS server; a forward lookup zone for the domain with host records for 192.168.1.50 and 192.168.1.51; the Windows client's primary DNS server set to 192.168.1.50 and its secondary to my ISP. I can ping 192.168.1.50 (domain controller) successfully but cannot ping 192.168.1.51 (domain-joined VM). When pinging from the Windows client: ping 192.168.1.51 -> Reply from 192.168.1.129: Destination host unreachable. When pinging from the domain controller: ping 192.168.1.51 -> Reply from 192.168.1.50: Destination host unreachable. I have 2 virtual network adapters, one PRIVATE for the intranet (set to static IP 192.168.1.51) and one PUBLIC for the internet with a dynamic IP. I noticed that the PUBLIC one inherited the "mydomain.com" domain suffix after joining the domain... I don't know what this meant, but it seemed more intuitive to me to switch THIS ONE to have the static IP. After I configured that I still could not ping, but now I get: ping 192.168.1.51 -> Request timed out. What seems to be the issue? I'm relatively new to networking.

    Read the article

  • Apache proxying to Unicorn server times out, how to avoid?

    - by Ian
    I have a Teambox installation running on Unicorn, and the latter sometimes times out after 30 seconds. The idea of this configuration would be for Apache to wait until the Unicorn master server sends a timeout, because, if I'm not wrong, Unicorn will quit the timed-out worker process but spawn a new one to handle the same request. Is there a way to configure Apache not to time out, like the nginx configuration of timeout = 0? Thanks for the help! EDIT: I found a way, though it doesn't really work as I expected. In the ProxyPass directive you have to specify a retry=0 option after the URL: ProxyPass / http://url/ retry=0. It doesn't work if the URL is a ProxyBalancer, though.
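
    For a balancer, the per-request options move onto the BalancerMember line rather than the ProxyPass directive; a sketch with assumed addresses and a timeout raised above Unicorn's 30 seconds:

        ProxyPass / http://127.0.0.1:8080/ retry=0 timeout=120

        # Balancer variant: retry/timeout go on each member, not on ProxyPass
        <Proxy balancer://unicorn>
            BalancerMember http://127.0.0.1:8080 retry=0 timeout=120
        </Proxy>
        ProxyPass / balancer://unicorn/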

    Read the article

  • Understanding HTTP Cookies in Indy 10 for Delphi XE2

    - by Jerry Dodge
    I have been working with Indy 10 HTTP servers/clients lately in Delphi XE2, and I need to make sure I'm understanding session management correctly. In the server, I have a "bucket" of sessions, which is a list of objects that each represent a unique session. I don't use a username and password to authenticate users; rather, I use a unique API key which is issued to a client and has an expiration. When a client wishes to connect to the server, it first logs in by calling the "login" command, which is a path like this: http://localhost:1234/login?APIKey=abcdefghij. The server checks this API key against the database, and if it's valid, it creates a new session in the bucket, issues a new cookie (unique string), and sets the response cookies with Success=Y and Cookie=abcdefghij. This is where I have the question. Assuming the client end has its own method of cookie management, the client will receive this login response back from the server and automatically save the cookies as necessary. Any future request from the client to the server shall automatically send along these cookies, and the client side doesn't necessarily have to worry about setting these cookies when sending requests to the server. Right? PS - I'm asking this question here on programmers.stackexchange.com because I didn't see it fit to ask on stackoverflow.com. If anyone thinks this is appropriate enough for stackoverflow.com, please let me know.
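
    That is how Indy is designed to behave once a cookie manager is attached; a minimal client-side sketch (units IdHTTP and IdCookieManager; the login URL mirrors the question, the /data path is a placeholder):

        procedure Demo;
        var
          HTTP: TIdHTTP;
          Cookies: TIdCookieManager;
        begin
          HTTP := TIdHTTP.Create(nil);
          Cookies := TIdCookieManager.Create(nil);
          try
            HTTP.CookieManager := Cookies;  // stores Set-Cookie values from responses
            HTTP.AllowCookies := True;
            // The login response sets the session cookie...
            HTTP.Get('http://localhost:1234/login?APIKey=abcdefghij');
            // ...and it is sent back automatically on every later request.
            HTTP.Get('http://localhost:1234/data');  // placeholder path
          finally
            Cookies.Free;
            HTTP.Free;
          end;
        end;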

    Read the article

  • UI message passing programming paradigm

    - by Ronald Wildenberg
    I recently (about two months ago) read an article that explained some user interface paradigm that I can't remember the name of, and I also can't find the article anymore. The paradigm allows for decoupling the user interface and backend through message passing (via some queueing implementation). So each user action results in a message being passed to the backend. The user interface is then updated to inform the user that his request is being processed. The assumption is that a user interface is stale by definition. When you read data from some store into memory, it is stale because another transaction may be updating the same data already. If you assume this, it makes no sense to try to represent the 'current' database state in the user interface (so the delay introduced by passing messages to a backend doesn't matter). If I remember correctly, the article also mentioned a read-optimized data store for rendering the user interface. The article assumed a high-traffic web application. A primary reason for using a message queue communicating with the backend is performance: returning control to the user as soon as possible. Updating backend stores is handled by another process, and eventually these changes also become visible to the user. I hope I have explained accurately enough what I'm looking for. If someone can provide some pointers to what I'm looking for, thanks very much in advance.
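
    What is described closely resembles CQRS (Command Query Responsibility Segregation) with eventual consistency; a toy Python sketch of the shape of it, with all names illustrative:

        import queue
        import threading

        commands = queue.Queue()   # UI -> backend message queue
        read_model = {}            # read-optimized store the UI renders from

        def handle_user_action(key, value):
            commands.put((key, value))  # enqueue and return immediately
            return "Your request is being processed"

        def backend_worker():
            while True:
                key, value = commands.get()
                read_model[key] = value  # change eventually visible to the UI
                commands.task_done()

        threading.Thread(target=backend_worker, daemon=True).start()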

    Read the article

  • Introducing the Oracle Parcel Service – Example/Reference Application

    - by Jeffrey West
    Over the last few weeks the product management team has been working on a webcast series that is airing in EMEA. It is a 5-episode series where we talk about different features of WebLogic and show how to build applications that take advantage of these features. Each session is focused on a different layer of the technology stack, and you can find the schedule below. The application we are building in this series is named the 'Oracle Parcel Service'. It is an example application and not a product of Oracle by any stretch of the imagination. Over the next few weeks we will be finalizing the code and will be releasing it for you to check out. For updates, request membership to the Oracle Parcel Service project on SampleCode.oracle.com: https://www.samplecode.oracle.com/sf/projects/oracle-parcel-svc/. Here are some of the key features that we are highlighting:
      - JPA 2.0 (new in WebLogic 10.3.4) with EclipseLink
      - Coherence TopLink Grid Level 2 cache for JPA
      - JAX-RS 1.0 (new in WebLogic 10.3.4) for RESTful services
      - Lightweight jQuery web UI for consuming RESTful services
      - JSF 2.0 (new in WebLogic 10.3.4) utilizing PrimeFaces
      - EJB 3.0
      - Spring-WS web services
      - JAX-WS web services
      - Spring MDPs for event-driven architectures
      - Java MDBs for event-driven architectures
      - Partitioned distributed topics for event-driven architectures
    Accessing the code on SampleCode.oracle.com: you will need to log in using your Oracle.com username and password. If you have not created an account, you will need to do so. It's a simple one-page form, and we don't bother you with too many emails. Please join the project to be kept up to date on changes to the code and new projects. Joining the project is not required, but very much appreciated. Once you have signed in, you should see an icon for accessing the source code via Subversion. You can also download a zip file containing the code.

    Read the article

  • How to resolve IPs in DNS based on the subnet of the requesting client?

    - by Nohsib
    Is it possible to configure BIND 9 or another DNS server to resolve the domain name of a machine into different IPs based on the subnet of the requesting client? For example, say the same service is running on 2 different application servers at different geographical points; when a request to resolve the domain name comes in, the name server provides the IP of the application server based on the requesting client's IP, so the service could be offered by servers that are geographically closer to the client. In short, something like a CDN, but just the IP resolution part based on the client's subnet. Is this configurable in any DNS?
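
    BIND 9 supports exactly this through views, which match on the client's source address; a minimal named.conf sketch with assumed networks and zone files:

        acl europe  { 10.1.0.0/16; };
        acl america { 10.2.0.0/16; };

        view "europe" {
            match-clients { europe; };
            zone "example.com" { type master; file "db.example.com.eu"; };
        };

        view "america" {
            match-clients { america; };
            zone "example.com" { type master; file "db.example.com.us"; };
        };

    Each view serves a different zone file, so the same name resolves to the A record nearest the requesting subnet.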

    Read the article

  • Should I include HTML markup in my JSON response?

    - by Mike M. Lin
    In an e-commerce site, when adding an item to a cart, I'd like to show a popup window with the options you can choose. Imagine you're ordering an iPod Shuffle and now you have to choose the color and text to engrave. I'd like the window to be modal, so I'm using a lightbox populated by an Ajax call. Now I have two options.
    Option 1: Send only the data, and generate the HTML markup using JavaScript. What's nice about this is that it trims down the Ajax request to the bare minimum and doesn't mix the data with the markup. What's not so great is that now I need to use JavaScript to do my rendering, instead of having a template engine on the server side do it. I might be able to clean up the approach a bit by using a client-side templating solution.
    Option 2: Send the HTML markup. What's good about this is that I can have the same server-side templating engine I'm using for the rest of my rendering tasks (Django) do the rendering of the lightbox. JavaScript is only used to insert the HTML fragment into the page. So it clearly leaves the rendering to the rendering engine. Makes sense to me. But I don't feel comfortable mixing data and markup in an Ajax call for some reason. I'm not sure what makes me feel uneasy about it. I mean, it's the same way every web page is served up -- data plus markup -- right?
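
    To make the two options concrete, the Ajax payloads might look like this (illustrative data, not from the poster's site):

        Option 1, data only, rendered by a client-side template:
            {"product": "iPod Shuffle", "colors": ["silver", "blue"], "maxChars": 30}

        Option 2, server-rendered markup wrapped in JSON:
            {"html": "<form class=\"engrave\"><select>...</select></form>"}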

    Read the article

  • Duplicity not writing to a pre-existing S3 bucket

    - by Saurabh Nanda
    I'm trying to back up a directory to a pre-existing Amazon S3 bucket using the following command:
        duplicity --no-encryption system/ s3+http://MY_BUCKET_NAME/backup
    However, I'm getting the following error consistently:
        S3CreateError: S3CreateError: 409 Conflict
        <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>BucketAlreadyOwnedByYou</Code><Message>Your previous request to create the named bucket succeeded and you already own it.</Message><BucketName>vacationlabs</BucketName><RequestId>3C1B8C49469E3374</RequestId><HostId>4dU1TKf3Td6R0yvG9MaLKCYvQfwaCpdM8FUcv53aIOh0LeJ6wtVHHduPSTqjDwt0</HostId></Error>
    The S3 bucket is empty and does NOT contain the backup directory. The bucket is in the Singapore region.
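
    An educated guess at the fix: buckets outside US Standard (Singapore included) generally require subdomain-style bucket addressing, which duplicity exposes as a flag, so the command would become:

        duplicity --no-encryption --s3-use-new-style system/ s3+http://MY_BUCKET_NAME/backup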

    Read the article

  • Take Control of Workflow with Workflow Analyzer!

    - by user793553
    Take Control of Workflow with Workflow Analyzer! Immediate analysis and output of your EBS Workflow environment. The EBS Workflow Analyzer is a script that reviews the current Workflow footprint, analyzes the configurations and environment, and provides feedback and recommendations on best practices and areas of concern. Go to Doc ID 1369938.1 for more details, the script download, and a short overview video.
    Proactive benefits:
      - Immediate analysis and output of the Workflow environment
      - Identifies aged records
      - Identifies Workflow errors and volumes
      - Identifies looping Workflow items and stuck activities
      - Identifies Workflow system setup and configurations
      - Identifies and recommends Workflow best practices
      - Easy-to-add tool for regular Workflow maintenance
      - Execute the analysis anytime to compare trending against past outputs
    The Workflow Analyzer presents key details in an easy-to-review graphical manner; see the examples below.
    Workflow Runtime Data Table Gauge: the gauge shows critical (red), bad (yellow), and good (green) depending on the number of workflow items (WF_ITEMS).
    Workflow Error Notifications Pie Chart: a pie chart shows the workflow error notification types.
    Workflow Runtime Table Footprint Bar Chart: a bar chart shows the workflow runtime table footprint.
    The analyzer also gives detailed listings of setups and configurations. As an example, the workflow services are listed along with their status for review. The analyzer draws attention to key details with yellow and red boxes highlighting areas of review. You can extend any query by reviewing the SQL script and then running it on your own or making modifications for your own needs.
    Find more details in these notes:
      - Doc ID 1369938.1 - Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance
      - Doc ID 1425053.1 - How to run the EBS Workflow Analyzer Tool as a Concurrent Request
    Or visit the My Oracle Support EBS - Core Workflow Community.

    Read the article

  • Internet cafe software for Linux

    - by pehrs
    I have gotten a request to roll out a total of 8 internet cafes in a large network. The budget is non-existent, as it will all be done for a non-profit. I was planning to use Ubuntu and live CDs to minimize the amount of management required, but I can't seem to find any suitable internet cafe system that is Ubuntu-based. The requirements are pretty basic: It needs to keep track of logged-in time and log out users when their time is up. No billing will be done; it will just be used to ensure people can share the computers fairly. It should be possible to force logout from a central system. Users will be unskilled, so it has to have a GUI. What (preferably free, considering the shoe-string budget) software would you suggest to manage this?

    Read the article

  • Ubuntu eats itself after I followed updater instruction

    - by Tony Martin
    Updater (I assume) put a no-entry-style alert icon on the panel which informed me that certain package dependencies were not up to snuff. Upgrades were thereafter only partial. The dialogue advised that I (and this is from noob memory) sudo apt-get install -f. I did this and typed in the confirmation phrase and watched apt-get systematically remove every component of Linux, both the stuff I installed and the core Ubuntu packages. I could only assume at this stage that this was for a fresh install, but of course, I know better now. There's much complaint about Windows, but I've never met with advice from Microsoft tools to wipe out the operating system because of a couple of missing .dlls. So what gives? This was a 64-bit install of 12.04. All that is left is GRUB pointing to a couple of Windows recovery partitions on the hard drive. I'm tempted, but I have hopes of recovering the data that I had enough misguided faith to trust to the Linux ext4 partition. I've tried pen-driving back into it with a 32-bit ISO, but I'm simply informed that Ubuntu is running in low-graphics mode and get to watch the dots cycle indefinitely. EDIT: Thanks for the advice vis the positive request. I've got onto the machine with a 64-bit stick and can see the file structure left behind by the installation. My first instinct was to run install from the stick, but it did not seem to offer a recovery option. My question then: is there a way to recover the current installation so that if I reinstall the packages I had, they will pick up the original settings? I'm particularly worried about losing email from Evolution - the rest I could probably lash back together. I would also be interested in how this disaster came about. I see people in the know recommending this same procedure in similar circumstances. Thanks for your attention, Tony Martin

    Read the article

  • IIS 7 URL rewrite rule

    - by Andrew
    Hello, guys! We have two web servers here behind a router - one IIS and one Tomcat (on different machines / IP addresses). The domain is pointing to our external IP, which is forwarded to IIS (internal IP 192.168.1.10, for example). I'm trying to do the following: when [www.]ourdomain.com is entered, the default web site on IIS has to be loaded (this part is OK), but when test.ourdomain.com is entered, I want to redirect this request to another web server (192.168.1.11, for example). I created a site "test" on IIS and it is displayed when test.ourdomain.com is entered. Then I tried to redirect it with the following rule: Requested URL matches the pattern: * (using wildcards); Condition: {HTTP_HOST} matches test.ourdomain.com; Action type: Rewrite; Rewrite URL: http://192.168.1.11/{R:0}. But when I try to load test.ourdomain.com now, I get IIS's error 404 page. Obviously I'm wrong :-) How can I do such a redirect?
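
    One likely missing piece (an assumption, not visible in the question): rewriting to a different machine makes IIS act as a reverse proxy, which requires the Application Request Routing (ARR) module with proxying enabled; without it the rewritten request falls back to the local site and 404s. The rule itself, as it would appear in web.config:

        <rule name="test-subdomain" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^test\.ourdomain\.com$" />
          </conditions>
          <action type="Rewrite" url="http://192.168.1.11/{R:0}" />
        </rule>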

    Read the article

  • Boot from VHD with Windows 7 - bcdedit trouble

    - by Michiel Overeem
    I'm running Windows 7 Enterprise, x64 version. I've created a Windows 7 VHD file with the help of the following blog post: hanselman blog. After that, I added it to my boot menu with the help of another blog post: hanselman blog. This worked great. After that, I upgraded my HDD. With the help of Clonezilla I copied the old disk to the new disk. The next step was to copy the VHD to another partition. Then I updated the boot menu. However, the step C:\>bcdedit /set {guid} device vhd=[driveletter:]\<directory>\<vhd filename> fails with the message: An error has occurred setting the element data. The request is not supported. What is happening?
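
    For reference, a boot-from-VHD entry normally needs both its device and osdevice elements pointed at the file; a sketch of the pair of commands (GUID and path are placeholders):

        bcdedit /set {guid} device vhd=[D:]\VHDs\win7.vhd
        bcdedit /set {guid} osdevice vhd=[D:]\VHDs\win7.vhd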

    Read the article

  • FreeBSD 8.2 + Apache 2.2 + mod_auth_pam2: unable to authenticate

    - by zneak
    I've installed Apache 2.2 and mod_auth_pam2 from ports, but I can't get local UNIX authentication to work. When I access the protected part of my local website, I do get the authentication request, and with pam_permit.so, it works. However, when I change pam_permit.so to the real thing, pam_unix.so, I get this message in httpd-error.log: [error] PAM: user 'foo' - not authenticated: authentication error This is the relevant part of my Apache config, though I don't think it's the problem as it works with pam_permit.so: <Location /foo> AuthBasicAuthoritative Off AuthPAM_Enabled on AuthPAM_FallThrough off AuthType Basic AuthName "Secret place" Require valid-user </Location> This is my /etc/pam.d/httpd, though I don't think it's the problem either, since it works with pam_permit.so: auth required pam_unix.so account required pam_unix.so So what am I missing? What does it take to have pam_unix.so work for httpd under FreeBSD?
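
    One plausible culprit (a guess, not confirmed by the post): pam_unix.so has to read the shadow password database, which only root can do, while Apache's worker processes run as the unprivileged www user. A quick check:

        ls -l /etc/master.passwd /etc/spwd.db   # readable by root only
        ps aux | grep httpd                     # worker children run as www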

    Read the article

  • Issues importing PST into Archive Mailbox Exchange 2013

    - by atomicharri
    I've completed a successful mailbox import request into the archive mailbox for a particular user. There are no errors to speak of, and the size of the archive mailbox has grown to the expected size after import (approximately 6 GB). However, in OWA I can't see any of the mail folders inside the archive mailbox, only the Deleted Items and RSS Feeds folders. Yet if I run some cmdlets to list the contents of the Archive Mailbox\Inbox folder in the Exchange Shell, the full list of subfolders comes up. And if I do a mail item search on the archive mailbox, emails from those individual folders appear as search results! I just can't see the folders in the navigation pane and hence cannot browse any of my old emails. Any help would be appreciated.
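
    For anyone comparing notes, the shell-side folder listing can be pulled with a standard cmdlet (the mailbox identity is a placeholder):

        Get-MailboxFolderStatistics -Identity "jsmith" -Archive |
            Select-Object Name, ItemsInFolder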

    Read the article

  • dhcpd: varying vendor-class-identifier

    - by jessicah
    I'm having trouble selectively sending parameters in response to a DHCP Inform packet using groups (or even without, just using host declarations) for bootp stuff. My configuration file right now looks like: subnet 130.123.131.128 netmask 255.255.255.128 { allow unknown-clients; } host dev-mac-09 { option vendor-class-identifier "example-identifier"; hardware ethernet 10:9a:dd:51:ff:83; } If I put vendor-class-identifier in the global scope, using tcpdump I can see that the client receives the vendor class option successfully. If I take it out and just keep it in the host scope (or group scope), the client never receives the option. Specifying option dhcp-parameter-request-list 60 doesn't help either. I did try using a class definition inside a group, but then it applied even if the host wasn't a part of the group. As an aside, how do I get detailed logging? At least something to indicate what groups and things got used to generate the response to the client.
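
    On the logging aside, dhcpd writes through syslog, so the usual recipe is a dedicated facility (a sketch; file paths vary by distribution):

        # dhcpd.conf
        log-facility local7;

        # /etc/syslog.conf (or the rsyslog equivalent)
        local7.*    /var/log/dhcpd.log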

    Read the article

  • Router stopping my python server

    - by drfrev
    This was originally posted on stackoverflow.com, but it was suggested I move it here after it was realized it wasn't my code that was wrong. So my problem, very simply, is that I cannot get the computers connected to my router to communicate. For example: if I ping a wireless computer, I get no response and the request times out. If I ping a computer wired directly to the modem, it works fine. When I ping, I use the local IP in each case. If it helps, my original post is here: http://stackoverflow.com/questions/12593024/python-cannot-go-over-internet-network/12593361#12593361 And some screenshots of different things are here: http://imgur.com/a/jUZ4G#3 Thank you, any help is greatly appreciated. NOTE: I am heading off to bed now, so I will respond around 6:00 AM EST if anyone posts some help.

    Read the article

  • Forwarding a subdomain to the main domain using GoDaddy

    - by Ryan Hayes
    I have a current blog, which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net, and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted using GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing, you can see it at http://blog.ryanhayes.net). When I try to ping both the subdomain and the domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via another domain. Currently, I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I'm assuming it would do the same thing. My question is: what is the proper way to set up my DNS to make the blog subdomain forward to my main domain (301) using the GoDaddy DNS manager? Bonus: what is the background on why I am getting a 403 error the current way? "Forbidden. You don't have permission to access / on this server. Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request." UPDATE 12/7/2010: The error on the site has been fixed; you can no longer view it from my site.
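
    One background point worth stating: DNS alone can never issue a 301. A CNAME just sends the browser to the same web server under a different Host header, so the redirect has to come either from GoDaddy's domain-forwarding service or from the web server itself. A sketch of the server-side variant, assuming Apache:

        <VirtualHost *:80>
            ServerName blog.ryanhayes.net
            Redirect permanent / http://ryanhayes.net/
        </VirtualHost>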

    Read the article

  • Increase application performance on Amazon AWS

    - by Honus Wagner
    I've got a client with an MVC v1 (.NET) application running on a micro instance. On this instance, I've got .NET, IIS 7.5, and MS SQL Server 2008 running to handle the application. The client has reported that it is taking nearly 10 seconds to process each request. Even loading the initial login page takes about that long, then logging in takes that long, and so on. The currently running instance specs are as follows: 615 MB RAM, Intel Xeon CPU E5430 @ 2.66GHz (2.78 GHz), 64-bit. Is the memory availability the issue, or is it the processing power? I foresee two options: change to a larger instance, or set up a 2-tier architecture with two micro instances. Which of these will give the application better performance? Thanks in advance.

    Read the article

  • Cannot read/access Apache2 access logs

    - by webworm
    I have been asked to take a look at some access logs for an Apache2 web server running on Ubuntu. I have been told by the administrator of the machine that my login has "admin" access, yet I cannot seem to copy the access logs from Apache2 to my local machine via FTP for analysis. I figure one of two things is happening: I don't really have full admin access, or some other process (perhaps Apache2) has control of the log files and won't let me copy them. How can I tell if I truly have admin access? What type of access do I need to request? Root access? Something else? Should I be able to copy these log files with admin access?
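
    A quick triage sequence (a sketch; the paths are Ubuntu's defaults, where Apache logs are owned root:adm with mode 640):

        ls -l /var/log/apache2/                  # who owns the logs, and what mode?
        id                                       # is your account in the adm group?
        sudo cp /var/log/apache2/access.log ~/   # works if you have sudo rights

    If the sudo copy works, the account has admin (sudo) rights, but the FTP daemon is still reading the files as an unprivileged user, which would explain the failed transfer.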

    Read the article

  • Nintex workflow tips and tricks

    - by ybbest
    Here are some Nintex 2010 workflow-related tips and tricks, and I will keep updating them.
    1. How to add a link in an email using Nintex.
       a. Go to the Insert tab and select Link.
       b. Select the URL you'd like to set for the link.
       c. After you have done this, you will see the link inserted into the email.
    2. How to make the Flexi task reject option called "Decline" and make the comments mandatory.
       a. Open the Flexi task action configuration prompt as shown below.
       b. Click on the edit icon and change the settings (the before and after values are shown in the original post's screenshots).
    3. When saving or publishing a Nintex workflow and receiving the following error: "Server was unable to process request. ---> The file hxtp://../NintexWorkflows/Workflowname/Workflowname.xoml is checked out for editing by Domain/Username." To fix it, perform the following steps:
       a. In the publish dialogue, uncheck "Overwrite existing version" and rename the workflow.
       b. Delete the old workflow which was checked out.
       c. Publish the new workflow again with the old name.
       d. Delete the "temporary" workflow.

    Read the article

  • Varnish Running VCC-compiler failed on purge

    - by FLX
    I've been following this guide, which uses this default.vcl. However, when starting Varnish I get the following error:
        * Starting HTTP accelerator [fail]
        storage_malloc: max size 1024 MB.
        Message from VCC-compiler:
        Expected '(' got ';'
        (program line 341), at
        (input Line 43 Pos 22)
        purge;
        ---------------------#
        Running VCC-compiler failed, exit 1
        VCL compilation failed
    This means that there is something wrong with purge here:
        sub vcl_hit {
            if (req.request == "PURGE") {
                purge;
                error 200 "Purged.";
            }
        }
    I don't see anything wrong; can someone explain? Thanks!
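
    A plausible reading of the error (an inference, not from the guide): the bare purge; statement is Varnish 3.x VCL, while a 2.x VCC expects a function-style call, so the installed binary and the example config are mismatched. The classic 2.x-style equivalent of a purge-on-hit is to expire the cached object:

        sub vcl_hit {
            if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
            }
        }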

    Read the article
