Search Results

Search found 87571 results on 3503 pages for 'application fault'.


  • Is there an IE8 setting or policy to make it work like IE7 with respect to persistent connections?

    - by Stephen Pace
    I am working with a commercial application running on XP using IIS 5.1. Periodically the application returns the IIS error "There are too many people accessing the Web site at this time." This is caused by Microsoft artificially limiting the number of connections (10) under IIS 5.1 on Windows XP, but in this case there is really only one user (albeit with a few tabs open at a time). Microsoft suggests you can reduce the problem by turning off HTTP keep-alives for that particular web site: http://support.microsoft.com/kb/262635

    "If you use IIS 5.0 on Windows 2000 Professional or IIS 5.1 on Microsoft Windows XP Professional, disable HTTP keep-alives in the properties of the Web site. When you do this, a limit of 10 concurrent connections still exists, but IIS does not maintain connections for inactive users."

    I may do that, but I'm worried about performance degradation. I also notice that IE8 appears to handle this differently than IE7: by default, IE6 and IE7 use 2 persistent connections while IE8 uses 6. Perhaps IE8 itself is generating multiple connections in an attempt to be faster, and those additional connections are overwhelming the artificially limited IIS 5.1 on XP?

    Assuming that is the case, is there an Internet Explorer option, registry setting, or policy I can set to force IE8 to behave like IE7 with respect to persistent connections? I would not set this for all users, but for the small number of users that use this application, it might solve their intermittent problem until the application can be rehosted on Windows Server 2008. Thanks.
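
    A minimal sketch of one candidate answer, assuming IE8 honors the documented per-user connection values under the Internet Settings key (worth verifying on a test machine before pushing this out via a login script or Group Policy Preferences):

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v MaxConnectionsPerServer /t REG_DWORD /d 2 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v MaxConnectionsPer1_0Server /t REG_DWORD /d 2 /f

    Setting both values to 2 would mimic IE7's behavior for just the affected users; a restart of the browser is needed for the change to take effect.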

    Read the article

  • Different network response for identical co-located machines

    - by Santosh
    We have a situation as follows: we have two virtual machines (VMs) on a remote server farm. The machines are identical in terms of hardware/software (OS) configuration. We have a J2EE application running on JBoss on each of those two machines. The two applications are different versions, say V1 on VM1 and V2 on VM2. We observed degraded response time for application V2 when accessed via its public URL; when we accessed the application through a secured VPN, there was hardly any difference. The bandwidth tests (upload/download speed, ping, etc.) show that VM1 responds better when accessed via the secured VPN. We concluded that the application does not seem to have a performance issue, because if that were the case, the degradation should also appear when accessed via the VPN. So we concluded it's a network problem. But since those two identical VMs are on the same network, we are looking for the reasons for the different responses. My question is: given the above situation, what could be the reasons for such behavior?
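
    A quick way to separate network-path time from application time is to compare per-phase timings over both routes; a sketch with hypothetical hostnames and addresses (mtr and curl's -w timing variables are standard):

        mtr --report v2.example.com            # public path to VM2
        mtr --report 10.0.0.12                 # same VM via the VPN/internal address
        curl -o /dev/null -s -w "dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n" http://v2.example.com/

    If connect times are similar on both routes but time-to-first-byte diverges only on the public path, suspicion falls on whatever sits in front of the VMs on the public route (NAT, firewall, load balancer) rather than on JBoss.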

    Read the article

  • Database mirroring login failure attempts on mirror server

    - by Chandan
    I have configured database mirroring between two servers about 40 miles apart. Server specification (the same for principal, mirror and witness): SQL Server 2008, Standard Edition, 64-bit. The configuration is high-safety with automatic failover. Initially we tested our .NET web application on both the principal and the mirror and made sure that the login is not orphaned. Things run fine generally, but sometimes on the mirror server I see failed login attempts:

    Login failed for user 'd0main\user'. Reason: Failed to open the explicitly specified database. [CLIENT: xx.xx.x.x]
    Error: 18456, Severity: 14, State: 38.

    This error appears 3-4 times a day, but not more than that. My question to the experts: if the principal is alive, why does the application try to connect to the mirror? The default timeout for a .NET web page is 30 seconds, so is it possible that the application tries to connect to the principal and, after 30 seconds, assumes the principal is dead even though it is alive, and thus tries to open a connection to the mirror, where it fails? Please help me with this problem.
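
    For reference, this is roughly what a mirroring-aware connection string looks like (server and database names below are placeholders):

        Data Source=PRINCIPAL01;Failover Partner=MIRROR01;Initial Catalog=MyDatabase;Integrated Security=True;Connect Timeout=15

    Note that SqlClient caches the partner name it learns from the principal, so a brief network glitch or connection timeout against the principal can legitimately send the very next connection attempt to the mirror, where it fails with exactly this state-38 error because the mirror database is not readable. Occasional entries like this are normal background noise for a mirrored pair.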

    Read the article

  • Rewriting HTML links with modproxyperlhtml

    - by Juancho
    I'm trying to set up an Apache reverse proxy using mod_proxy and modproxyperlhtml. This is my scenario:

    Domain for the proxy: http : // www.myserver.com/
    Destination server (the one behind the proxy): http : // myserver.foo.com/myapp/

    (I'm sorry that I have to space out the URLs, but serverfault doesn't allow me to post more than two links as a "spam protection mechanism" - ridiculous on a site where you ask questions about servers and it's quite likely you'll need to post the same URL more than twice to explain your question.)

    The idea is to map http : // www.myserver.com/ to http : // myserver.foo.com/myapp/ . Note that the path on the proxy is / and on the destination server it is /myapp/. All of the examples I can find on the net (like the one in the official modproxyperlhtml documentation) are the other way around, i.e. path /myapp/ on the proxy and / on the destination server. This is my current config, which doesn't work:

    ProxyPass / http : // myserver.foo.com/myapp/
    ProxyPassReverse / http : // myserver.foo.com/myapp/
    PerlInputFilterHandler Apache2::ModProxyPerlHtml
    PerlOutputFilterHandler Apache2::ModProxyPerlHtml
    SetHandler perl-script
    PerlSetVar ProxyHTMLVerbose "On"
    LogLevel Info
    <Location / >
      # ProxyPassReverse /myapp/
      PerlAddVar ProxyHTMLURLMap "/myapp/ /"
      PerlAddVar ProxyHTMLURLMap "http : // myserver.foo.com /"
    </Location>

    The examples put the ProxyPassReverse inside the Location directive, but in my case that only works when it is outside. With this configuration the links aren't being replaced as they should be; my guess is that the Location isn't being matched, so the rewrite rules aren't being applied. The error log only shows that the module uncompresses the content and searches it, but doesn't find anything:

    [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Uncompressing text/html; charset=UTF-8, Content-Encoding: gzip\n
    [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n
    [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Compressing output as Content-Encoding: gzip\n
    [Tue Nov 13 08:42:06 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n

    What could be wrong?
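
    One variant worth trying, offered purely as a sketch (not verified against this exact setup): since the whole site is proxied from /, attach the URL maps at server level instead of inside a <Location> block, and map the absolute backend URL including its path, so both relative and absolute links get rewritten:

    ProxyPass / http : // myserver.foo.com/myapp/
    ProxyPassReverse / http : // myserver.foo.com/myapp/
    PerlInputFilterHandler Apache2::ModProxyPerlHtml
    PerlOutputFilterHandler Apache2::ModProxyPerlHtml
    SetHandler perl-script
    PerlAddVar ProxyHTMLURLMap "/myapp/ /"
    PerlAddVar ProxyHTMLURLMap "http : // myserver.foo.com/myapp /"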

    Read the article

  • ADFS Relying Party

    - by user49607
    I'm trying to set up an Active Directory Federation Services relying party and I get the following error. I've tried adding <pages validateRequest="false"> to web.config and it doesn't make a difference. Can someone help me out?

    Server Error in '/test' Application.
    A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").

    Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. To allow pages to override application request validation settings, set the requestValidationMode attribute in the httpRuntime configuration section to requestValidationMode="2.0". Example: <httpRuntime requestValidationMode="2.0" />. After setting this value, you can then disable request validation by setting validateRequest="false" in the Page directive or in the <pages> configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case. For more information, see http://go.microsoft.com/fwlink/?LinkId=153133.

    Exception Details: System.Web.HttpRequestValidationException: A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [HttpRequestValidationException (0x80004005): A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").]
    System.Web.HttpRequest.ValidateString(String value, String collectionKey, RequestValidationSource requestCollection) +11309476
    System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, RequestValidationSource requestCollection) +82
    System.Web.HttpRequest.get_Form() +186
    Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.IsSignInResponse(HttpRequest request) +26
    Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.CanReadSignInResponse(HttpRequest request, Boolean onPage) +145
    Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.OnAuthenticateRequest(Object sender, EventArgs args) +108
    System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80
    System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +266
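
    As the error text itself hints, on .NET 4.0 a <pages validateRequest="false"> setting alone is ignored unless request validation is also dropped back to 2.0 mode; a minimal web.config sketch combining the two:

    <system.web>
      <httpRuntime requestValidationMode="2.0" />
      <pages validateRequest="false" />
    </system.web>

    (WIF posts the signed wresult token back to the page as a form field, which is exactly what trips ASP.NET request validation in the first place.)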

    Read the article

  • Active Directory + IIS + SQL + ASP.NET

    - by Amira Elsayed Ismail
    I have sent the following question to the Stack Overflow website:

    I have installed Windows Server 2008 R2 on a virtual machine. Can I install Active Directory with a domain controller + IIS + SQL Server on the same machine? I want to build a web application that will authenticate users against Active Directory. The web application should be published on the server's IIS, and users should access it remotely from home using my machine's domain name. Someone told me that it's very wrong to have IIS and Active Directory on the same machine.

    I got the following answer:

    You can't use Active Directory over the internet. At least not without something like a VPN as a middle man. Their home computers will not be joined to the domain, so there is no pass-through authentication. Yes, it's a bad idea to put AD on the web server. Why is too complex to get into in an answer here. Suffice it to say that even if you did do this, it probably would not work the way you are thinking it should. It's not impossible to do this. For instance, many of the Microsoft "Small Business" products put IIS, AD, and SQL Server on the same server. But you kind of have to know what you're doing to configure it securely.

    Then I added the following comment:

    Thanks for your reply. So what do you think is the best way to do this, as I haven't done anything like this before? Should I install Active Directory on one machine and IIS on another? And what about SQL Server: should I add it to the same server as Active Directory? I didn't mention that it will be a Microsoft Dynamics server that will access some information about work, and I also have to read data from Axapta. Also, what is a VPN and how can I use it to let users access my web application from anywhere?

    Sorry for my long questions, and thanks in advance. If anyone can help, I will be thankful.

    Read the article

  • IIS 7.5 Warning : Cannot verify access to the path

    - by Mostafa
    I'm a newbie with IIS 7.5; before this I used to run ASP.NET websites under IIS 5, which was easy. I'm trying to run a very simple ASP.NET website (just created as a new website in VS 2010, targeting .NET 3.5) on IIS 7.5.7600 on Windows 7 Ultimate 64-bit. While adding the application, during Test Settings I receive a warning that says:

    The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<machine_name>$ has Read access to the physical path. Then test these settings again.

    But I don't know how to make sure the application pool identity has Read access to the physical path. I'm wondering if there is a step-by-step article or something that shows the walkthrough for running a simple ASP.NET website on IIS 7.5? I appreciate any help.
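
    A minimal sketch of granting that access from an elevated command prompt, assuming the default application pool and a hypothetical site path (IIS 7.5 application pool identities are addressable as "IIS AppPool\<pool name>"):

        icacls "C:\inetpub\wwwroot\MySite" /grant "IIS AppPool\DefaultAppPool":(OI)(CI)RX

    (OI)(CI) makes the grant inherit to files and subfolders; RX is read-and-execute. After this, re-running Test Settings should clear the warning.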

    Read the article

  • Configure one IIS site to handle two separate SSL certificates using external Load Balancing or SSL Acceleration Servers

    - by bmccleary
    I have one web application on our server that needs to be reachable under two different domain names, both of which have their own SSL certificate. The application is exactly the same for both domains, but we have to keep the two domain names for legal reasons. The problem is that, since each domain needs its own SSL certificate, our IIS 7.5 configuration has to contain two separate IIS applications (both pointing to the same physical location), each with its own unique IP address and SSL certificate installed. Now, I know that, due to the nature of SSL communications, this is by design, and that you can't assign more than one SSL certificate per IP address and domain name. My question is: is there any way around this limitation that keeps one web application in IIS and has it service two SSL certificates based on host name? I know this is not possible with the basic IIS configuration, but I was thinking that with some combination of external load balancing and/or SSL acceleration servers/services, those servers could process the SSL requests and leave IIS clean with one single application. I am not familiar at all with these technologies, hence the reason I am asking whether it is theoretically possible. If not, does anyone else know how to achieve this?
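
    Yes, this is a standard TLS-termination pattern. As an illustration only (hostnames, paths and the backend address below are made up), an nginx front end could hold both certificates and forward plain HTTP to the single IIS application:

    upstream iis_backend {
        server 10.0.0.10:80;
    }
    server {
        listen 443 ssl;
        server_name www.domain-one.com;
        ssl_certificate     /etc/nginx/ssl/domain-one.crt;
        ssl_certificate_key /etc/nginx/ssl/domain-one.key;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://iis_backend;
        }
    }
    server {
        listen 443 ssl;
        server_name www.domain-two.com;
        ssl_certificate     /etc/nginx/ssl/domain-two.crt;
        ssl_certificate_key /etc/nginx/ssl/domain-two.key;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://iis_backend;
        }
    }

    IIS then only ever sees plain HTTP with the original Host header, so a single site with two host-header bindings on port 80 suffices.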

    Read the article

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to "SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database", which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly, the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved - I gathered from the first question that it was a single machine running both the IIS and SQL instances. The application which is trying to connect to the database is an ASP.NET one running on another server (if that makes any difference; I'm not sure it does). The ConnectionString being used in the web.config for the application is:

    data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi;

    And the application pool is set to NetworkService for its identity. So, in the web app I get the following error:

    Cannot open database "MyDatabase" requested by the login. The login failed. Login failed for user 'MyDomain\WebServerMachineName$'

    In the SQL Server logs I see:

    Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address]

    Running this bit of SQL against the database in question:

    USE [MyDatabase]
    GO
    SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role]
    FROM sys.database_principals SDP
    INNER JOIN sys.database_role_members SDRM ON SDP.principal_id = SDRM.member_principal_id
    INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id

    Gets me this result:

    MyDomain\WebServerMachineName$  WINDOWS_USER  DB_DDLADMIN
    MyDomain\WebServerMachineName$  WINDOWS_USER  DB_DATAREADER
    MyDomain\WebServerMachineName$  WINDOWS_USER  DB_DATAWRITER

    Which appears to me to indicate I've got the permissions right. Anyone have any idea why it's not working, or how I can narrow the issue down some more?
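
    Error 18456 state 38 means the login authenticated but the specified database could not be opened for it, so with the database-level roles looking right, the server-level login is the next thing to check; a small sketch:

    SELECT name, type_desc, default_database_name, is_disabled
    FROM sys.server_principals
    WHERE name = 'MyDomain\WebServerMachineName$';

    If the login is missing or disabled at the server level, or the database was offline, in recovery, or single-user at the time of the failures, the open of the explicitly specified database fails exactly like this even though the database user and roles exist.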

    Read the article

  • Mystery 0xc0000142 error on starting java from a service, as a different user.

    - by cpf
    This is a very convoluted setup, but effectively this is what goes down: Manager service (which I don't have control over) running as admin user X starts my executable, which then starts Java as user Y using the standard c# StartInfo.Username/Password controls. Now, from a basic (not elevated or anything, just admin) command prompt I can run that executable, and Java pops up and works fine, running perfectly under the user it should be. When the service runs the same executable, however, Java silently fails. The only hint I see is this series of events in the event viewer: Service starts "Application popup: java.exe - Application Error : The application was unable to start correctly (0xc0000142). Click OK to close the application. " (googling this reveals a lot of scam sites telling me to use their "free antivirus to fix 0xc0000142 errors easy!"... sigh) Service stops (the java shutdown propagated, which is supposed to happen) And here's what process explorer has for the failure: As you can see, everything shows as a success. Now, I think this might have something to do with the permissions (the user java.exe is running under has traverse permission for the entire drive and full permissions to Directory A, which is where the .jar is), but I just can't fathom how something that works fine from the command line (and, this is an upgrade, the previous system without the user-switching aspect works fine from the service) can fail with such a cryptic message and little showing up in logs.
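
    For what it's worth, this is roughly the launch pattern in question, written as a sketch (paths and names are made up); the LoadUserProfile flag is the first thing worth checking, since 0xc0000142 from a service-launched process often points at process initialization failing for lack of a loaded user profile/environment or a usable window station in the service session:

    using System;
    using System.Diagnostics;
    using System.Security;

    class Launcher
    {
        static void Main()
        {
            // Demo only; in real code the password comes from secure configuration.
            var password = new SecureString();
            foreach (char c in "secret") password.AppendChar(c);

            var psi = new ProcessStartInfo(@"C:\java\bin\java.exe", @"-jar C:\DirectoryA\app.jar")
            {
                UserName = "userY",
                Domain = "MYDOMAIN",
                Password = password,
                UseShellExecute = false,      // required when supplying credentials
                LoadUserProfile = true,       // load user Y's profile; omitting this is a common cause of init failures
                WorkingDirectory = @"C:\DirectoryA"
            };
            Process.Start(psi);
        }
    }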

    Read the article

  • NTLM, Kerberos and F5 switch issues

    - by G33kKahuna
    I'm supporting an IIS-based application that is scaled out into web and application servers; both tiers run behind IIS. The application is NTLM-capable when IIS is configured to authenticate via Kerberos, and it's been working so far without a glitch. Now I'm trying to bring in two F5 switches: one in front of the web servers and another in front of the application servers. The two F5 instances (say IPs .185 and .186) sit on a Linux host; F5-to-F5 traffic goes through NAT IPs (say .194, .195 and .196). I created DNS entries for all IPs, including the NAT addresses, and ran SETSPN to register the IIS service account to be trusted at the HTTP, HOST and domain level. With the web F5 turned on, and with each web server pointing at a fixed app server, trust works: when the user connects to the web F5's domain name, the user authenticates without a problem. However, when the app load balancer is turned on and the web servers are pointed at the new F5 app domain name, the user gets a 401. The IIS log shows no authenticated username, just a 401 status. Wireshark does show a Negotiate ticket header being passed in. Any ideas or suggestions are much appreciated. Please advise.
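
    If the app-tier VIP has its own DNS name, the service account needs an SPN for exactly the name the web servers now call; a sketch with hypothetical names (setspn -S also checks for duplicate SPNs, which break Kerberos silently):

        setspn -S HTTP/f5-app.mydomain.com MYDOMAIN\svc-iisapp
        setspn -L MYDOMAIN\svc-iisapp

    A Negotiate header that falls back to NTLM rather than Kerberos (visible in Wireshark as NTLMSSP inside the ticket) is the classic symptom of a missing or duplicated SPN for the load-balanced name.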

    Read the article

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing webserver setup for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put Nginx with a simple proxy to Gunicorn. Using ab, Gunicorn spits out 7700 requests/sec, where Nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

    #!/usr/bin/env python
    def application(environ, start_response):
        start_response("200 OK", [("Content-type", "text/plain")])
        return ["Hello World!"]

    Gunicorn:

    gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

    user www-data;
    worker_processes 4;
    pid /var/run/nginx.pid;
    events {
        worker_connections 768;
    }
    http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        upstream app_server {
            server 127.0.0.1:8000 fail_timeout=0;
        }
        server {
            listen 8080 default;
            keepalive_timeout 5;
            root /home/app/app/static;
            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://app_server;
            }
        }
    }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

    ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.
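
    Some drop is expected, since every benchmarked request now crosses an extra hop, and by default nginx opens a fresh HTTP/1.0 connection to the upstream for each request. One thing worth trying (a sketch; the upstream keepalive directive needs a reasonably recent nginx, 1.1.4 or later) is reusing upstream connections:

    upstream app_server {
        server 127.0.0.1:8000 fail_timeout=0;
        keepalive 32;
    }
    # inside the existing location / block:
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://app_server;

    This keeps a pool of persistent connections to gunicorn, which pairs well with the --keep-alive 60 flag already in use.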

    Read the article

  • 503 Error After Microsoft Request Routing Is Installed - 32 bit 64 bit madness

    - by KenB
    I have a requirement to install the Microsoft Application Request Routing component for IIS 7.5 running on a Windows 2008 R2 SP1 64-bit machine. After installing Application Request Routing via the Web Platform Installer, our ASP.NET 4.0 application gets "HTTP Error 503. The service is unavailable." The Windows event log error details say:

    The Module DLL 'C:\Program Files\IIS\Application Request Routing\requestRouter.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a AMD64 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshoot this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349.

    I can make this error go away by changing the application pool to run in 32-bit mode, by setting "Enable 32-Bit Applications" to true. However, I would prefer not to do that. My questions are:

    Why is the Application Request Routing feature trying to load a 32-bit version - isn't there a 64-bit version of it?
    How do I resolve this issue without having to switch my application pool to 32-bit mode?
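
    For completeness, the scriptable form of the 32-bit workaround (the pool name below is hypothetical), which you would only keep until the 64-bit ARR binaries load correctly - for example after reinstalling ARR from the standalone x64 installer rather than via WebPI, on the assumption (worth verifying) that the WebPI run registered the x86 module:

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true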

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However, as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn - and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:

    - uses a database (MySQL, SQLite, etc.) for configuration and data storage
    - exposes an API for adding/removing hosts and services

    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
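
    That sar idea is quite workable; as a sketch (the bucket name, paths and the S3 upload step are hypothetical), binary sar files are self-contained, so archiving one file per instance ID gives exactly the per-instance lookup described:

        # at instance startup
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        sar -o /var/log/sa/$INSTANCE_ID.sa 60 >/dev/null 2>&1 &    # sample every 60s until shutdown

        # just before termination, ship the binary file keyed by instance ID
        aws s3 cp /var/log/sa/$INSTANCE_ID.sa s3://perf-archive/

        # later, on any box with sysstat installed
        sar -f i-0abc123.sa -u    # CPU
        sar -f i-0abc123.sa -r    # memory

    Since the files are archived verbatim, the full 1-minute resolution is kept indefinitely, sidestepping the RRD dilution problem entirely.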

    Read the article

  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines. We are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate.

    What I've done: I have configured IIS to require SSL and require (I also tried accept) certificates under SSL Settings. I have created the https binding and set it to the proper server certificate. I've installed all the root and intermediate chain certificates for the soft certificates properly in the current user and local machine stores.

    The problem: when I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the only certificate offered is one created by my company that would be invalid for use in the application. I am not offered the soft certificates that I have installed using MMC and IE. We are able to use the soft certs from our development machines against the Windows 2008 servers that host the application.

    What I did: I have attempted to copy the root CA to every folder location in the Current User and Local Machine stores that the company certificate's root is in.

    My questions: Could I be mishandling the certs anywhere else? Could there be a local/group policy that could be blocking the other certs from use? What (if anything) has to be done differently on Windows 7 compared to 2008 in regards to IIS? Thanks for your help.
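
    One Windows 7 / 2008 R2-specific behavior worth ruling out (whether it applies here is an assumption, but it is a known Schannel change): the server sends a trusted-issuers list during the TLS handshake, and the browser only offers client certificates chaining to an issuer on that list. Disabling the list on the IIS box, then rebooting or restarting HTTP services:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" /v SendTrustedIssuerList /t REG_DWORD /d 0 /f

    With the list suppressed, the browser offers all valid client certificates, which would explain why the soft certs show up against the 2008 servers but not locally if the two machines advertise different issuer lists.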

    Read the article

  • Rewrite a url on Nginx

    - by Ido B
    I tried to use this:

    location / {
        root /path.to.app/;
        index index.php index.html;
        rewrite ^/(.*)$ /check_register.php?key=$1 break;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /path.to.app/$fastcgi_script_name;
        include fastcgi_params;
    }

    And it didn't work. This is my full config:

    user www-data www-data;
    worker_processes 4;
    events {
        worker_connections 3072;
    }
    http {
        include mime.types;
        default_type application/octet-stream;
        access_log off;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;
        keepalive_timeout 15;
        gzip on;
        gzip_comp_level 3;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        server {
            listen 80;
            server_name localhost;
            location / {
                root html;
                index index.html index.htm;
            }
            location / {
                root /path.to.app/;
                index index.php index.html;
                rewrite ^/(.*)$ /check_register.php?key=$1 break;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /path.to.app/$fastcgi_script_name;
                include fastcgi_params;
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }
        include /usr/local/nginx/sites-enabled/*;
    }

    How can I make it work?
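
    One observation, offered as a hedged sketch rather than a verified fix: the server block declares location / twice, which nginx normally rejects as a duplicate location. Merging them into a single block may already change the behavior:

    server {
        listen 80;
        server_name localhost;
        root /path.to.app/;
        index index.php index.html;

        location / {
            rewrite ^/(.*)$ /check_register.php?key=$1 break;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /path.to.app/$fastcgi_script_name;
            include fastcgi_params;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    With break, the rewritten URI (/check_register.php?key=...) is what gets handed to FastCGI, so $fastcgi_script_name resolves to /check_register.php.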

    Read the article

  • Multi-authentication scenario for a public internet service using Kerberos

    - by StrangeLoop
    I have a public web server which has users coming from the internet (via HTTPS) and from a corporate intranet. I wish to use Kerberos authentication for the intranet users so that they are automatically logged in to the web application without the need to provide any login/password (assuming they are already logged in to the Windows domain). For the users coming from the internet I want to provide traditional basic/form-based authentication; user/password data for these users would be stored internally in a database used by the application. The web application will be configured to use Kerberos authentication for users coming from specific intranet IP networks, and basic/form-based authentication will be used for the rest of the users.

    From a security perspective, are there risks involved in this kind of setup, or is this a generally accepted solution? My understanding is that the server doesn't need access to the KDC (see Kerberos authentication, service host and access to KDC) and that it can be completely isolated from AD and the corporate intranet. The server has a keytab file stored locally that is used to decrypt tickets sent by the users coming from the intranet. The tickets only contain the username and domain of the incoming user; the server never sees the passwords of authenticated users.

    If the server were hacked and the keytab file compromised, an attacker could forge tickets for any domain user and get access to the web application as any user. But typically this is the case anyway whenever an attacker gains access to a keytab file on the local filesystem. The encryption key contained in the keytab file is based on the service account password in AD and is stored in hashed form; I assume it is very difficult to brute-force this password if strong Kerberos encryption like AES-256-SHA1 is used. As the server has no network access to the intranet, even the compromised service account couldn't be directly used for anything.
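
    For reference, a keytab for an isolated server like this is typically generated on the AD side and copied over; a sketch with hypothetical names (the /crypto value matches the AES-256 point above, and /pass * prompts for the service account password):

        ktpass /princ HTTP/service.company.com@CORP.COMPANY.COM /mapuser CORP\svc-websso /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /pass * /out service.keytab

    The web server only ever holds this file; ticket validation is a local decrypt-and-check, which is what makes the no-KDC-access design work.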

    Read the article

  • IIS 7 much slower than IIS 6

    - by JoeJoe
    I have an ASP.NET 3.5 web application running fine on Windows 2003 / IIS 6. I published the exact same application to IIS 7.5 (Windows 2008 R2) on a faster box (i5, 8 GB RAM) and it is significantly slower: 5-6 seconds per page vs. 1-2 seconds per page. During that time the Task Manager CPU stays under 10%. Both attach to the same database on another box. The benchmark is consistent from any client browser or machine. I have connection pooling on both, compression on both, the same network subnet, and forms authentication (no SSL yet). Can you give me steps to troubleshoot where the delays are being introduced, or settings in IIS 7 that I may have overlooked? I'm just using defaults. There is only one web site on each box. I understand the role of an Application as defined in IIS has changed; there is no special Application defined in IIS.
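
    A cheap first step is to split the per-request time into connect, first-byte and transfer phases against both servers (the URL is a placeholder); if time-to-first-byte dominates on the IIS 7.5 box, the delay is inside the request pipeline (authentication, app pool startup, compression) rather than on the network:

        for i in 1 2 3 4 5; do
          curl -o /dev/null -s -w "connect:%{time_connect}  ttfb:%{time_starttransfer}  total:%{time_total}\n" http://server/page.aspx
        done

    Comparing a first request after an app pool recycle against warmed-up requests also quickly distinguishes startup cost from steady-state slowness.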

    Read the article

  • Separate Certificate by Subdomain (With multiple IPs)

    - by Brian
    Note: yes, I realize this problem is easier to solve by just using one multi-domain or wildcard certificate.

    I wish to have an ASP.NET site running on IIS with two SSL domains sharing one web application but using separate certificates. Assuming I have two certificates, this can be solved on IIS 7 as follows:

    Web Application 1:
    Binding 1: http, 80, IP address *, host name *
    Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)
    Binding 3: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say: two certificates and two IP addresses, both mapped to the same web application. In IIS 6, the closest I have been able to come to this configuration is:

    Web Application 1:
    Binding 1: http, 80, IPADDRESS1
    Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)

    Web Application 2:
    Binding 1: http, 80, IPADDRESS2
    Binding 2: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say: two certificates and two IP addresses, two web applications, both mapped to the same file location. The IIS 6 solution is not optimal: even when sharing an application pool, there are still costs associated with running the same site as two applications. Is upgrading from IIS 6 to IIS 7 a legitimate way to resolve this problem? Or is there an IIS 6 way to map two IP addresses within the same web application to different certificates?
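
    On the IIS 6 side, a possible experiment, offered with heavy hedging (I have not verified that IIS 6 tolerates a second mapping for a site it believes has a single certificate): certificate-to-IP bindings ultimately live in the HTTP.SYS store, and on Windows 2003 they can be set directly with httpcfg (placeholder thumbprints below); netsh http show sslcert is the 2008+ way to inspect the same table:

        httpcfg set ssl /i IPADDRESS1:443 /h THUMBPRINT_OF_CERTDOMAIN1
        httpcfg set ssl /i IPADDRESS2:443 /h THUMBPRINT_OF_CERTDOMAIN2

    If IIS 6 overwrites or ignores the second mapping, the two-site workaround (or the IIS 7 upgrade) remains the dependable route.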

    Read the article

  • Reverse web proxy with time constraints

    - by user2893458
    I have a web application which produces several unique URLs of the type http://service.company.com/service.html?type=aaaa&key=jfiZm6u6cW where the last part is a randomly generated key. Each such URL provides access to an instance of the service provided. I am looking for a way to restrict access to those URLs based on time constraints; for example, URL #1 should be available between 8:00 AM and 10:00 AM on May 30, URL #2 between 10:30 AM and 12:00 PM on May 31, and so on. I already have a resource-scheduling application based on Drupal and would like to find a way to include those URLs as scheduled resources. The web application is deployed on Apache Tomcat, and I don't have the knowledge or the resources to alter it; therefore I thought I could put some sort of reverse proxy in front of the web app to implement the time-constraint feature. In my mind, the reverse proxy would allow or deny access to each URL based on rules that my scheduling application would provide. There may be other ways to deliver such a solution, but I can't think of anything better, so the question is: is there a reverse web proxy architecture that could allow access to the destination URLs based on time and date rules? Any other ideas are more than welcome.
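
    One concrete shape for such a proxy, as a sketch with invented hostnames: nginx's auth_request module issues a subrequest to the scheduling application for every incoming request; the scheduler answers 2xx to allow or 403 to deny based on the key and the current time:

    location /service.html {
        auth_request /authz;
        proxy_pass http://tomcat.internal:8080;
    }

    location = /authz {
        internal;
        proxy_pass http://scheduler.internal/authz;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;    # the Drupal side reads type/key from this
    }

    This keeps all scheduling logic in the Drupal application, while nginx merely enforces its yes/no verdicts, so the Tomcat application never needs to change.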

    Read the article

  • Developing a high-performance and scalable Zend Framework website

    - by Daniel
    We are going to develop a classified-ads website like http://www.gumtree.com/ (it will not be exactly like that one, but just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator - I understand this will create problems with scalability across multiple servers) + jQuery, with a CDN for static content, a separate server for images, and a scalable server for the requests and the rest.

    My questions are:
    - What do you think about my approach?
    - What solutions would you recommend in terms of the server stack (MySQL, NGINX, MongoDB or PostgreSQL) for a scalable PHP application expected to have a lot of traffic? I would be interested in your approach.

    Note: I'm a Zend Framework developer and don't have much experience with the server side (to determine what would be the best solution for my scalable application).

    Read the article

  • IIS 7: launch unique site instance per host name

    - by OlduwanSteve
    Is it possible to configure IIS 7 so that a single site with multiple bindings (or wildcard bindings) will launch a unique instance for each unique host name? To explain why this is desirable, we have an application that retrieves its configuration from a remote system. The behaviour of the application is governed by this configuration and not by the 'web.config'. The application uses its host name as a key to retrieve the configuration. Currently it is a manual process to create an identical IIS site for each instance of the application, differing only by the bindings. My thought, if it were possible, is that it would be nice to have one IIS site that effectively works as a template for an arbitrary number of dynamic sites. Whenever it is accessed by a unique host name a new instance of the site would be launched, and all further requests to that host name would go to that instance just as though I had created the site by hand. I use IIS regularly, but only for fairly straightforward site hosting. I'd like to know if this could be configured with vanilla IIS 7, but would also welcome answers that require a plugin or 3rd party product. Programming/architectural suggestions about changes to the app wouldn't really be appropriate for serverfault.

    Read the article

  • What are the right questions to ask when deciding whether to use Chef or Puppet?

    - by John Feminella
    I am about to start a new project which will, in part, require deploying many identical nodes of approximately three different classes:

    Data nodes, which will run sharded instances of MongoDB.
    Application nodes, which will run instances of a Ruby on Rails application and an older ASP.NET MVC application.
    Processing nodes, which will run jobs requested by the application nodes.

    All the nodes will run on instances of Ubuntu 10.04, though they will have different packages installed. I have some familiarity with Chef from previous projects, though I don't consider myself an expert. In an effort to do due diligence, I have been investigating alternative possibilities. We have a number of folks in-house who are long-time Puppet users, and they have encouraged me to take a look. I am having trouble evaluating both choices, though. Chef and Puppet share much of the same domain terminology - packages, resources, attributes, and so on - and they have a common history that stems from taking different approaches to the same problem. So in some sense they are very similar. But much of the comparison information I've found, like this article, is a little outdated. If you were starting this project today, what questions would you ask yourself to decide whether you should use Chef or Puppet for configuration management? (Note: I don't want an answer to the question "Should I use Chef or Puppet?")

    Read the article

  • Why does my application ask for a codec to play the MVI (.MOV) video files while I can play them in WMP and QuickTime?

    - by Daniel Lip
    I have an application i did some time ago when im loading the video file its ok when trying to play/use the file im getting the messageBox message say that its need a codec to use gspot or search the internet. Wehn im playing this files on my hard disk with Windows Media Play or either QuickTime there is no problems. The Video files for example name are: MVI_2483 in the file name properties i see its type: Quick Time Movie (.MOV) In my application im using DirectShowLib-2005.dll this is the class im using in my case to extract the video file im using it in my application to extract only lightnings from the video file name. In Form1 i have a button click event that just starting the action: private void button8_Click(object sender, EventArgs e) { viewToolStripMenuItem.Enabled = false; fileToolStripMenuItem.Enabled = false; button2.Enabled = false; label14.Visible = false; label15.Visible = false; label21.Visible = false; label22.Visible = false; label24.Visible = false; label25.Visible = false; ExtractAutomatic = true; DirectoryInfo info = new DirectoryInfo(_videoFile); string dirName = info.Name; automaticModeDirectory = dirName + "_Automatic"; subDirectoryName = _outputDir + "\\" + automaticModeDirectory; if (secondPass == true) { Start(true); } Start(false); } This is the function start in Form1: private void Start(bool secondpass) { setpicture(-1); if (Directory.Exists(_outputDir) && secondpass == false) { } else { Directory.CreateDirectory(_outputDir); } if (ExtractAutomatic == true) { string subDirectory_Automatic_Name = _outputDir + "\\" + automaticModeDirectory; Directory.CreateDirectory(subDirectory_Automatic_Name); f = new WmvAdapter(_videoFile, Path.Combine(subDirectory_Automatic_Name)); } else { string subDirectory_Manual_Name; if (Directory.Exists(subDirectoryName)) { subDirectory_Manual_Name = subDirectoryName; f = new WmvAdapter(_videoFile, Path.Combine(subDirectory_Manual_Name)); } else { subDirectory_Manual_Name = _outputDir + "\\" + averagesListTextFileDirectory + "_Manual"; Directory.CreateDirectory(subDirectory_Manual_Name); f = new WmvAdapter(_videoFile, Path.Combine(subDirectory_Manual_Name)); } } button1.Enabled = false; f.Secondpass = secondpass; f.FramesToSave = _fts; f.FrameCountAvailable += new WmvAdapter.FrameCountEventHandler(f_FrameCountAvailable); f.StatusChanged += new WmvAdapter.EventHandler(f_StatusChanged); f.ProgressChanged += new WmvAdapter.ProgressEventHandler(f_ProgressChanged); this.Text = "Processing Please Wait..."; label5.ForeColor = Color.Green; label5.Text = "Processing Please Wait"; button8.Enabled = false; button5.Enabled = false; label5.Visible = true; pictureBox1.Image = Lightnings_Extractor.Properties.Resources.Weather_Michmoret; Hrs = 0; //number of hours Min = 0; //number of Minutes Sec = 0; //number of Sec timeElapsed = 0; label10.Text = "00:00:00"; label11.Visible = false; label12.Visible = false; label9.Visible = false; label8.Visible = false; this.button1.Enabled = false; myTrackPanelss1.trackBar1.Enabled = false; this.checkBox2.Enabled = false; this.checkBox1.Enabled = false; numericUpDown1.Enabled = false; timer1.Start(); label2.Text = ""; label1.Visible = true; label2.Visible = true; label3.Visible = true; label4.Visible = true; f.Start(); } And this is the class wich is not my oqn class i just just defined it in some places wich making the problem: using System; using System.Diagnostics; using System.Drawing; using System.Drawing.Imaging; using System.IO; using System.Runtime.InteropServices; using DirectShowLib; using 
System.Collections.Generic; using Extracting_Frames; using System.Windows.Forms; namespace Polkan.DataSource { internal class WmvAdapter : ISampleGrabberCB, IDisposable { #region Fields_Properties_and_Events bool dis = false; int count = 0; const string fileName = @"d:\histogramValues.dat"; private IFilterGraph2 _filterGraph; private IMediaControl _mediaCtrl; private IMediaEvent _mediaEvent; private int _width; private int _height; private readonly string _outFolder; private int _frameId; //better use a custom EventHandler that passes the results of the action to the subscriber. public delegate void EventHandler(object sender, EventArgs e); public event EventHandler StatusChanged; public delegate void FrameCountEventHandler(object sender, FrameCountEventArgs e); public event FrameCountEventHandler FrameCountAvailable; public delegate void ProgressEventHandler(object sender, ProgressEventArgs e); public event ProgressEventHandler ProgressChanged; private IMediaSeeking _mSeek; private long _duration = 0; private long _avgFrameTime = 0; //just save the averages to a List (not to fs) public List<double> AveragesList { get; set; } public List<long> histogramValuesList; public bool Secondpass { get; set; } public List<int> FramesToSave { get; set; } #endregion #region Constructors and Destructors public WmvAdapter(string file, string outFolder) { _outFolder = outFolder; try { SetupGraph(file); } catch { Dispose(); MessageBox.Show("A codec is required to load this video file. Please use http://www.headbands.com/gspot/ or search the web for the correct codec"); } } ~WmvAdapter() { CloseInterfaces(); } #endregion public void Dispose() { CloseInterfaces(); } public void Start() { EstimateFrameCount(); int hr = _mediaCtrl.Run(); WaitUntilDone(); DsError.ThrowExceptionForHR(hr); } public void WaitUntilDone() { int hr; const int eAbort = unchecked((int)0x80004004); do { System.Windows.Forms.Application.DoEvents(); EventCode evCode; if (dis == true) { return; } hr = _mediaEvent.WaitForCompletion(100, out evCode); }while (hr == eAbort); DsError.ThrowExceptionForHR(hr); OnStatusChanged(); } //Edit: added events protected virtual void OnStatusChanged() { if (StatusChanged != null) StatusChanged(this, new EventArgs()); } protected virtual void OnFrameCountAvailable(long frameCount) { if (FrameCountAvailable != null) FrameCountAvailable(this, new FrameCountEventArgs() { FrameCount = frameCount }); } protected virtual void OnProgressChanged(int frameID) { if (ProgressChanged != null) ProgressChanged(this, new ProgressEventArgs() { FrameID = frameID }); } /// <summary> build the capture graph for grabber. 
</summary> private void SetupGraph(string file) { ISampleGrabber sampGrabber = null; IBaseFilter capFilter = null; IBaseFilter nullrenderer = null; _filterGraph = (IFilterGraph2)new FilterGraph(); _mediaCtrl = (IMediaControl)_filterGraph; _mediaEvent = (IMediaEvent)_filterGraph; _mSeek = (IMediaSeeking)_filterGraph; var mediaFilt = (IMediaFilter)_filterGraph; try { // Add the video source int hr = _filterGraph.AddSourceFilter(file, "Ds.NET FileFilter", out capFilter); DsError.ThrowExceptionForHR(hr); // Get the SampleGrabber interface sampGrabber = new SampleGrabber() as ISampleGrabber; var baseGrabFlt = sampGrabber as IBaseFilter; ConfigureSampleGrabber(sampGrabber); // Add the frame grabber to the graph hr = _filterGraph.AddFilter(baseGrabFlt, "Ds.NET Grabber"); DsError.ThrowExceptionForHR(hr); // --------------------------------- // Connect the file filter to the sample grabber // Hopefully this will be the video pin, we could check by reading it's mediatype IPin iPinOut = DsFindPin.ByDirection(capFilter, PinDirection.Output, 0); // Get the input pin from the sample grabber IPin iPinIn = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Input, 0); hr = _filterGraph.Connect(iPinOut, iPinIn); DsError.ThrowExceptionForHR(hr); // Add the null renderer to the graph nullrenderer = new NullRenderer() as IBaseFilter; hr = _filterGraph.AddFilter(nullrenderer, "Null renderer"); DsError.ThrowExceptionForHR(hr); // --------------------------------- // Connect the sample grabber to the null renderer iPinOut = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Output, 0); iPinIn = DsFindPin.ByDirection(nullrenderer, PinDirection.Input, 0); hr = _filterGraph.Connect(iPinOut, iPinIn); DsError.ThrowExceptionForHR(hr); // Turn off the clock. This causes the frames to be sent // thru the graph as fast as possible hr = mediaFilt.SetSyncSource(null); DsError.ThrowExceptionForHR(hr); // Read and cache the image sizes SaveSizeInfo(sampGrabber); //Edit: get the duration hr = _mSeek.GetDuration(out _duration); DsError.ThrowExceptionForHR(hr); } finally { if (capFilter != null) { Marshal.ReleaseComObject(capFilter); } if (sampGrabber != null) { Marshal.ReleaseComObject(sampGrabber); } if (nullrenderer != null) { Marshal.ReleaseComObject(nullrenderer); } GC.Collect(); } } private void EstimateFrameCount() { try { //1sec / averageFrameTime double fr = 10000000.0 / _avgFrameTime; double frameCount = fr * (_duration / 10000000.0); OnFrameCountAvailable((long)frameCount); } catch { } } public double framesCounts() { double fr = 10000000.0 / _avgFrameTime; double frameCount = fr * (_duration / 10000000.0); return frameCount; } private void SaveSizeInfo(ISampleGrabber sampGrabber) { // Get the media type from the SampleGrabber var media = new AMMediaType(); int hr = sampGrabber.GetConnectedMediaType(media); DsError.ThrowExceptionForHR(hr); if ((media.formatType != FormatType.VideoInfo) || (media.formatPtr == IntPtr.Zero)) { throw new NotSupportedException("Unknown Grabber Media Format"); } // Grab the size info var videoInfoHeader = (VideoInfoHeader)Marshal.PtrToStructure(media.formatPtr, typeof(VideoInfoHeader)); _width = videoInfoHeader.BmiHeader.Width; _height = videoInfoHeader.BmiHeader.Height; //Edit: get framerate _avgFrameTime = videoInfoHeader.AvgTimePerFrame; DsUtils.FreeAMMediaType(media); GC.Collect(); } private void ConfigureSampleGrabber(ISampleGrabber sampGrabber) { var media = new AMMediaType { majorType = MediaType.Video, subType = MediaSubType.RGB24, formatType = FormatType.VideoInfo }; int hr = 
sampGrabber.SetMediaType(media); DsError.ThrowExceptionForHR(hr); DsUtils.FreeAMMediaType(media); GC.Collect(); hr = sampGrabber.SetCallback(this, 1); DsError.ThrowExceptionForHR(hr); } private void CloseInterfaces() { try { if (_mediaCtrl != null) { _mediaCtrl.Stop(); _mediaCtrl = null; dis = true; } } catch (Exception ex) { Debug.WriteLine(ex); } if (_filterGraph != null) { Marshal.ReleaseComObject(_filterGraph); _filterGraph = null; } GC.Collect(); } int ISampleGrabberCB.SampleCB(double sampleTime, IMediaSample pSample) { Marshal.ReleaseComObject(pSample); return 0; } int ISampleGrabberCB.BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen) { if (Form1.ExtractAutomatic == true) { using (var bitmap = new Bitmap(_width, _height, _width * 3, PixelFormat.Format24bppRgb, pBuffer)) { if (!this.Secondpass) { long[] HistogramValues = Form1.GetHistogram(bitmap); long t = Form1.GetTopLumAmount(HistogramValues, 1000); Form1.averagesTest.Add(t); } else { //this is the changed part if (_frameId > 0) { if (Form1.averagesTest[_frameId] / 1000.0 - Form1.averagesTest[_frameId - 1] / 1000.0 > 150.0) { count = 6; } if (count > 0) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); count --; } } } _frameId++; //let only report each 100 frames for performance if (_frameId % 100 == 0) OnProgressChanged(_frameId); } } else { using (var bitmap = new Bitmap(_width, _height, _width * 3, PixelFormat.Format24bppRgb, pBuffer)) { if (!this.Secondpass) { //get avg double average = GetAveragePixelValue(bitmap); if (AveragesList == null) AveragesList = new List<double>(); //save avg AveragesList.Add(average); //***************************\\ // for (int i = 0; i < (int)framesCounts(); i++) // { // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); //***************************\\ //} } else { if (FramesToSave != null && FramesToSave.Contains(_frameId)) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); using (BinaryWriter binWriter = new BinaryWriter(File.Open(fileName, FileMode.Create))) { for (int i = 0; i < histogramValuesList.Count; i++) { binWriter.Write(histogramValuesList[(int)i]); } binWriter.Close(); } } } _frameId++; //let only report each 100 frames for performance if (_frameId % 100 == 0) OnProgressChanged(_frameId); } } return 0; } /* int ISampleGrabberCB.SampleCB(double sampleTime, IMediaSample pSample) { Marshal.ReleaseComObject(pSample); return 0; } int ISampleGrabberCB.BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen) { using (var bitmap = new Bitmap(_width, _height, _width * 3, PixelFormat.Format24bppRgb, pBuffer)) { if (!this.Secondpass) { //get avg double average = GetAveragePixelValue(bitmap); if (AveragesList == null) AveragesList = new List<double>(); //save avg AveragesList.Add(average); //***************************\\ // for (int i = 0; i < (int)framesCounts(); i++) // { // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); long t = 
Form1.GetTopLumAmount(HistogramValues, 1000); //***************************\\ Form1.averagesTest.Add(t); // to add this list to a text file or binary file and read the averages from the file when its is Secondpass !!!!! //} } else { if (FramesToSave != null && FramesToSave.Contains(_frameId)) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); using (BinaryWriter binWriter = new BinaryWriter(File.Open(fileName, FileMode.Create))) { for (int i = 0; i < histogramValuesList.Count; i++) { binWriter.Write(histogramValuesList[(int)i]); } binWriter.Close(); } } for (int x = 1; x < Form1.averagesTest.Count; x++) { double fff = Form1.averagesTest[x] / 1000.0 - Form1.averagesTest[x - 1] / 1000.0; if (Form1.averagesTest[x] / 1000.0 - Form1.averagesTest[x - 1] / 1000.0 > 180.0) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); _frameId++; } } } _frameId++; //let only report each 100 frames for performance if (_frameId % 100 == 0) OnProgressChanged(_frameId); } return 0; }*/ private unsafe double GetAveragePixelValue(Bitmap bmp) { BitmapData bmData = null; try { bmData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb); int stride = bmData.Stride; IntPtr scan0 = bmData.Scan0; int w = bmData.Width; int h = bmData.Height; double sum = 0; long pixels = bmp.Width * bmp.Height; byte* p = (byte*)scan0.ToPointer(); for (int y = 0; y < h; y++) { p = (byte*)scan0.ToPointer(); p += y * stride; for (int x = 0; x < w; x++) { double i = ((double)p[0] + p[1] + p[2]) / 3.0; sum += i; p += 3; } //no offset incrementation needed when getting //the pointer at the start of each row } bmp.UnlockBits(bmData); double result = sum / (double)pixels; return result; } catch { try { bmp.UnlockBits(bmData); } catch { } } return -1; } } public class FrameCountEventArgs { public long FrameCount { get; set; } } public class ProgressEventArgs { public int FrameID { get; set; } } } I remember i had this codec problem/s before and i installed the codec/'s that were needed but in this case both quick time and windows media player can play the video files so why the application cant detect and find the codec/'s on my computer ? Gspot say that the codec is AVC1 but again wmp and quicktime play the video files no problems. The video files are from my digital camera !

    Read the article

  • Generating a twitter OAuth access key - the semi-manual way

    - by Piet
    [UPDATE] Apparently someone at Twitter was listening, or I'm going senile/blind. Let's call it a combination of both. Instead of following all the steps below, you can just log in with the Twitter account you want to use on http://dev.twitter.com, register your application and then click 'Edit Details' on the application overview page at http://dev.twitter.com/apps. Next click the 'Application detail' button on the right, followed by the 'My Access Token' button, in order to get your Access Token and Access Token Secret. This makes the old post below rather obsolete. Clearly a case of me thinking everything is a nail and ruby is a hammer (don't they usually say this about java coders?)

    [ORIGINAL POST] OAuth is great! OAuth allows your application to use your user's data without the need to ask for their password. So Twitter made the API much safer for their and your users. Hurray! Free pizza for everyone! Unless of course you're using the Twitter API for your own needs, like running your own bot, and don't need access to other users' data. In such cases a simple username/password combination is more than enough. I can understand however that the Twitter guys don't really care that much about these exceptions(?). Most such uses for the API are probably rather spammy in nature.

    !!! If you have a twitter app that uses the API to access external users' data: look for another solution. This solution is ONLY meant when you ONLY need access to your own account(s) through the API.

    Other Solutions

    Mr Dallas Devries posted a solution here which involves requesting and scraping a one-time PIN. But: I like to minimize the amount of calls I make to twitter's API or pages to lessen my chances of meeting the fail whale. Also, as soon as the pin isn't included in a div called 'oauth_pin' anymore, this will fail. However, mr Devries' post was a starting point for my solution, so I'm much obliged to him for posting his findings.

    Authenticating with the Twitter API: old vs new

    Accessing the Twitter API the old way:

    require 'twitter'
    httpauth = Twitter::HTTPAuth.new('my_account', 'my_secret_password')
    client = Twitter::Base.new(httpauth)
    client.update('Hurray!')

    The OAuth way:

    require 'twitter'
    oauth = Twitter::OAuth.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY')
    oauth.authorize_from_access('123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis', 'fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh')
    client = Twitter::Base.new(oauth)
    client.update('Hurray!')

    In the above case, ve4whatafuzzksaMQKjoI is the 'consumer key' (sometimes also referred to as 'consumer token') and KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY is the 'consumer secret'. You'll get these from Twitter when you register your app. 123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis is the 'access token' and fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh is the 'access secret'. This combination gives the registered application access to your account. I'll show you how to obtain these by following the steps below. (Basically you'll need a bunch of keys, and you'll have to jump through some hoops to obtain them for your server/bot.)

    How to get these keys

    1. Surf to the twitter apps registration page

    Go to http://dev.twitter.com/apps to register your app. Login with your twitter account.

    2. Register your application

    Enter something for Application name, Description, website,... as I said: they make you jump through hoops.
    If you plan on using the API to post tweets, your application name and website will be used in the '5 minutes ago via...' line below your tweet. You could use this to point to a page with info about your bot, or maybe it's useful for SEO purposes. For application type I chose 'browser' and entered http://www.hadermann.be/callback as a 'Callback URL'. This url returns a 404 error, which is ideal, because after giving our account access to our 'application' (step 6) it will redirect to this url with an 'oauth_token' and 'oauth_verifier' in the url, and we need to get these from the url. It doesn't really matter what you enter here though; you could leave it blank, because you need to explicitly specify it when generating a request token. You probably want read & write access, so set this as the 'Default Access type'.

    3. Get your consumer key and consumer secret

    On the next page, copy/paste your 'consumer key' and 'consumer secret'. You'll need these later on. You also need these as part of the authentication in your script later on:

    oauth = Twitter::OAuth.new([consumer key], [consumer secret])

    4. Obtain your request token

    Run the following in IRB to obtain your 'request token'. Replace my fake consumer key and consumer secret with the ones you obtained in step 3, and use something else instead of http://www.hadermann.be/callback: although this will only give a 404, you shouldn't trust me.

    irb(main):001:0> require 'oauth'
    irb(main):002:0> c = OAuth::Consumer.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY', {:site => 'http://twitter.com'})
    irb(main):003:0> request_token = c.get_request_token(:oauth_callback => 'http://www.hadermann.be/callback')
    irb(main):004:0> request_token.token
    => "UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1"

    This (UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1) is the request token: copy/paste it, you will need it next.

    5. Authorize your application

    Surf to https://api.twitter.com/oauth/authorize?oauth_token=[the above token], for example:

    https://api.twitter.com/oauth/authorize?oauth_token=UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1

    This will bring you to the 'An application would like to connect to your account' screen on Twitter, where you can grant access to the app you just registered. If you aren't logged in, you need to log in first. Click 'Allow'. Unless you don't trust yourself.

    6. Get your oauth_verifier from the redirected url

    Your browser will be redirected to your callback url, with an oauth_token and oauth_verifier parameter appended. You'll need the oauth_verifier. In my case the browser redirected to:

    http://www.hadermann.be/callback?oauth_token=UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1&oauth_verifier=waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag

    Which returned a 404, giving me the chance to copy/paste my oauth_verifier: waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag

    7. Request an access token

    Back in IRB, use the oauth_verifier to request an access token, as follows:

    irb(main):005:0> at = request_token.get_access_token(:oauth_verifier => 'waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag')
    irb(main):006:0> at.params[:oauth_token]
    => "123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis"
    irb(main):007:0> at.params[:oauth_token_secret]
    => "fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh"

    We're there! 123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis is the access token and fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh is the access secret.

    Try it!
    Try the following to post an update:

    require 'twitter'
    oauth = Twitter::OAuth.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY')
    oauth.authorize_from_access('123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis', 'fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh')
    client = Twitter::Base.new(oauth)
    client.update('Cowabunga!')

    Now you can go to your twitter page and delete the tweet if you want to.

    Read the article
