Search Results

Search found 64033 results on 2562 pages for 'andrew siemer www andrewsiemer com'.


  • Is it possible to set up a VPN through Tor and still get software updates?

    - by moazhmi
    G'day, I have been trying to run a software update on Trusty, both from the terminal and from the Software Updater app, but neither has been successful; apparently my ISP has a broken transparent proxy. moazhmi@moazhmi-K52F:~$ wget -S http://extras.ubuntu.com/ubuntu/dists/trusty/InRelease --2014-06-04 17:34:52-- http://extras.ubuntu.com/ubuntu/dists/trusty/InRelease Resolving extras.ubuntu.com (extras.ubuntu.com)... 91.189.92.152 Connecting to extras.ubuntu.com (extras.ubuntu.com)|91.189.92.152|:80... connected. HTTP request sent, awaiting response... HTTP/1.1 200 OK Content-Type: text/html; charset=UTF-8 Content-Length: 213 Date: Wed, 04 Jun 2014 14:34:52 GMT Server: Apache/2.2.22 (Ubuntu) Connection: Keep-Alive Vary: Accept-Encoding Length: 213 [text/html] Saving to: ‘InRelease’ 100%[======================================>] 213 --.-K/s in 0s 2014-06-04 17:34:52 (22.1 MB/s) - ‘InRelease’ saved [213/213] I was advised to refer back to the ISP with a complaint, but that has returned with peanuts. My question is: is it possible to route a VPN through Tor so that I can run software updates from the terminal or the app? Browsing is beside the point here - I just need to update, which I am not able to do (only on 12.04).
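
    One possible workaround, sketched below under the assumption that the tor and privoxy packages can still be installed (from local media or a working mirror), is to chain APT through a local HTTP proxy that forwards to Tor's SOCKS listener, so update traffic bypasses the ISP's broken transparent proxy. The ports are the defaults for both daemons; adjust if yours differ.

        sudo apt-get install tor privoxy

        # /etc/privoxy/config -- hand everything to Tor's SOCKS5 listener
        forward-socks5 / 127.0.0.1:9050 .

        # /etc/apt/apt.conf.d/99tor-proxy -- point APT at Privoxy
        Acquire::http::Proxy "http://127.0.0.1:8118/";

        sudo service privoxy restart
        sudo apt-get update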

    Read the article

  • Basic Google Analytics Click Tracking and/or Overview

    - by Alan Storm
    This is a really basic Google Analytics question. Apologies in advance if it's not appropriate here, but I've had a lot of luck on Stack Overflow and this seems like the best Stack Exchange site for a question like this. I'm trying to understand how Google Analytics goals work, or if they're the right feature to be using for my situation. Most of the documentation I find online refers to the old version of the UI, not the new one. I have a website, let's call it blog.example.com. This website drives traffic to an ecommerce store, let's call that store.example2.com. I want to get reports on which links from blog.example.com are being clicked through leading to store.example2.com. How do you do this in Google Analytics? Are goals the right area to be looking at? Do I set up the goals on store.example2.com or blog.example.com? Or both? Is there any canonical user guide (free or paid) that covers how this works? I'm a competent programmer, but it's years since I dealt with conversion tracking on any serious level, and we've progressed well beyond my frozen caveman pixel tracking knowledge. Thanks in advance
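
    One common approach, sketched here with the classic asynchronous tracker since the question predates Universal Analytics, is to record each click on a link to the store as an event; the event reports on blog.example.com then show which links send visitors over. The trackOutbound helper name is illustrative, not part of the GA API.

        <script type="text/javascript">
        // Record the clicked link's destination as an event on the blog's property.
        function trackOutbound(link) {
          _gaq.push(['_trackEvent', 'Outbound', 'Click to store', link.href]);
        }
        </script>

        <a href="http://store.example2.com/some-product" onclick="trackOutbound(this);">Buy now</a>

    Goals, by contrast, are configured per view, so a destination or purchase goal would live on the store.example2.com property, with the blog showing up as the referral source; the two reports answer different questions.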

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (default) and French remain. When one selects a language option a parameter is added to the URL. For example, the home page: https://example.com (default) and https://example.com/main?l=fr_FR (French). I added a robots.txt to stop Google from crawling any of the language translations: # robots.txt generated at http://www.mcanerin.com User-agent: * Disallow: Disallow: /cgi-bin/ Disallow: /*?l= So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so that Google knows to de-index them. I checked what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM This URL should not exist - I removed the language translations - yet the page loads when it should not! I played around and typed example.com?whatever123; it seems that any parameters load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads for any parameter that needs to be de-indexed.
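
    A hedged sketch of one way out, assuming an Apache front end with mod_rewrite available: answer every request that carries the l= parameter with 410 Gone so the stale variants drop out of the index. Note that the robots.txt Disallow has to be lifted first - if Google is blocked from crawling those URLs it will never see the 410 (or a noindex header) and the stale entries can linger.

        RewriteEngine On
        # Match an l= key anywhere in the query string
        RewriteCond %{QUERY_STRING} (^|&)l= [NC]
        # [G] returns 410 Gone for the matched request
        RewriteRule ^ - [G]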

    Read the article

  • Postfix/Dovecot email sending works with SquirrelMail but not with Thunderbird?

    - by Mark S.
    I have set up an intranet email system using Postfix, Dovecot and SquirrelMail, which is working fine: I can send and receive mail for all users on the system. I presume that the issue is in the Postfix configuration, because when I configure Thunderbird to send mail I get the following error: An error occurred while sending mail. The mail server responded: 4.1.8 <[email protected]>: Sender address rejected: Domain not found. Please check the message recipient support@intranetdomain.com and try again. Here is the relevant syslog entry: NOQUEUE: reject: RCPT from host1.intranetdomain.com [192.168.11.1]: 450 4.1.8 <[email protected]>: Sender address rejected: Domain not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<[127.0.0.1]> I have configured MX records on the DNS server and they respond appropriately when I query for them, so I do not think that is the issue. I think my problem is caused by the default configuration of: smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sender_restrictions = reject_unknown_sender_domain Since this is on an internal network and will not be exposed to the internet as a whole, which options can I remove safely?
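
    A minimal sketch of a relaxed main.cf for an intranet-only server, assuming every client sits inside mynetworks: let the LAN and authenticated users through first, and drop the unknown-domain checks that reject sender addresses the resolver cannot look up.

        # /etc/postfix/main.cf (sketch)
        smtpd_sender_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination

    Reload Postfix afterwards with postfix reload. Keeping reject_unauth_destination is still worthwhile so the box cannot act as an open relay if it ever becomes reachable from outside.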

    Read the article

  • Tracking subdomains in the same profile as the main domain

    - by Osvaldo
    I have a site, let's call it http://www.example.com, with a non-universal Google Analytics account. Now we have to add new functionality on a subdomain like https://subdomain.example.com as a micro site. On that subdomain the URLs will be something like https://subdomain.example.com?param1=foo&param2=bar We can't change the requirements, as the main site and the mini-site use different CMS/applications. This is strictly a Google Analytics question. But we need to count pageviews and events that happen on that subdomain (with URLs like https://subdomain.example.com?param1=foo&param2=bar) as belonging to the main domain. So pageviews and events in https://subdomain.example.com?param1=foo&param2=bar need to be recorded as if they happened in http://www.example.com/path/to/whatever/I/want Fortunately we have full control over the JavaScript on the main domain site and on the subdomain site too. How can we make this work? Do we need to change the tracking code on both the main domain and the subdomain? Do we need to reconfigure Google Analytics? Please note again that we do not want to create a new view for the subdomain. Both mini-site and main site should be in the same account, property and view.
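
    A hedged sketch with the classic (non-Universal) asynchronous tracker the question describes: share the tracking cookie across *.example.com and send a virtual pageview path of your own choosing from the subdomain pages, so hits show up under the main site's view with whatever URL you want. The UA code and path below are placeholders.

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);                 // same property as www.example.com
        _gaq.push(['_setDomainName', '.example.com']);              // cookie valid for all subdomains
        _gaq.push(['_trackPageview', '/path/to/whatever/I/want']);  // virtual path instead of ?param1=...

    Events work the same way (_trackEvent does not depend on the page URL), and no filter or extra view is needed as long as both sites load the same property ID.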

    Read the article

  • How to handle new domain names?

    - by michael
    I have a new product which I'll call a pen ink reloader. I have a website using my product's name, for example www.inkywink.com, which I want to be found by searches for keywords such as "pen ink", "pen out of ink", "ink for pens", etc., since nobody knows that a pen ink reloader exists. I see that it's quite difficult to get on the front page for these keywords since they have lots of competition. However, I notice that the exact phrases I want to rank highly for are available as domains. I purchased "www.penink.com" and "penoutofink.com", which for argument's sake are highly searched and the perfect keywords to get eyes on my money site www.inkywink.com. Two questions: 1. What is my best option to leverage those names so that they appear near the top of searches and send traffic to my money site? Do I just 301-redirect them to inkywink.com, or should I create a little original content on each with links to my main site? 2. If I just have them redirected to inkywink.com, am I able to use keywords in the meta tags and headers for each site separately, or do they all automatically obtain the same headers and tags as the site to which they're redirected? Thanks to anyone who can help, as I'm a real newbie to all this.
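
    For the plain-redirect option, a hedged Apache sketch (assuming both keyword domains already point at the same server) looks like the block below. With a 301 the extra domains simply pass visitors, and whatever link value they have, to inkywink.com; they serve no page of their own, so there is nowhere for separate meta tags or headers to be indexed.

        <VirtualHost *:80>
            ServerName penink.com
            ServerAlias www.penink.com penoutofink.com www.penoutofink.com
            # Permanent (301) redirect of every path to the money site
            Redirect permanent / http://www.inkywink.com/
        </VirtualHost>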

    Read the article

  • How do I fix the HDMI/DVI display output with Intel HD 4000 Graphics in 12.04?

    - by YumYumYum
    I have an Alienware Dell PC with Intel HD 4000 Graphics (Ivy Bridge), as verified by the output of lspci | grep VGA posted below. 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) The PC only has HDMI and DVI display outputs, and using the HDMI output I am only being offered abnormal resolutions. As you can see below, it does not even list HDMI1 or DVI1, only a fallback. $ export DISPLAY=:0.0 && xrandr xrandr: Failed to get size of gamma for output default Screen 0: minimum 640 x 480, current 1360 x 768, maximum 1360 x 768 default connected 1360x768+0+0 0mm x 0mm 1360x768 0.0* 1024x768 0.0 800x600 0.0 640x480 0.0 How can I fix this? Does it just need to be configured differently, or will I need to use a newer kernel (since the Intel graphics drivers are included in the kernel)? Follow-up: I upgraded the kernel to the latest mainline build. Step 1: Go to http://kernel.ubuntu.com/~kernel-ppa/mainline/ and open the latest entry, http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/, then download: http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3_3.6.0-030600rc3.201208221735_all.deb http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-extra-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb Step 2: sudo dpkg -i linux*.deb Step 3: reboot. uname -a confirms that Ubuntu 12.04 is now running the latest kernel: $ uname -a Linux sun-Alienware-X51 3.6.0-030600rc3-generic #201208221735 SMP Wed Aug 22 21:36:32 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux But the same problem remains.
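
    As a stop-gap while the driver only exposes the "default" output, a hedged xrandr sketch like the one below can sometimes force a proper mode. It assumes the Intel KMS driver eventually exposes an output named HDMI1 (names such as HDMI-1 or DVI1 are also common), and the modeline is the standard CVT timing that cvt prints for 1920x1080 at 60 Hz.

        $ cvt 1920 1080 60
        $ xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
        $ xrandr --addmode HDMI1 "1920x1080_60.00"
        $ xrandr --output HDMI1 --mode "1920x1080_60.00"

    If xrandr still reports only "default", kernel mode-setting is not binding to the hardware at all, and the fix lies in the kernel/driver rather than in xrandr.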

    Read the article

  • "Can't open display" even after access with xhost

    - by Yann
    I'm trying to run a graphical program remotely, without using ssh. I've set the display variable on the server (let's say server.com; Linux, not Ubuntu, and no su rights) to point to my workstation (workstation.com, Ubuntu 10.04): setenv DISPLAY workstation.com:0 Then on my workstation I've tried both xhost +server.com and xhost + Then I ssh into the server (to test things): ssh username@server.com and try to run xclock, and get the following error: Error: Can't open display: workstation.com:0 I've looked at /etc/ssh/ssh_config on the workstation and I should be forwarding correctly: X11Forwarding yes. How do I go about troubleshooting this? What logs on the workstation document these failed attempts? To explain why I'm doing this: I want to run a batch job on a server to debug an MPI-based parallel program. I want to run xterm as the batch job executable, per the instructions provided by the system admins. This setup used to work. I reinstalled things on my workstation and since then I frequently got a one-time message along the lines of The authenticity of host 'hostname (XXX.XXX.XXX.XX)' can't be established. My attempt to fix the above was to move my ~/.ssh/known_hosts file to a backup on both server and host, and then to ssh from each to the other with the flag -o StrictHostKeyChecking=no. I no longer get that message, but I was wondering whether this plays a part in why X11 forwarding is not working?
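
    Two hedged checks that usually narrow this down. First, X11Forwarding in ssh_config only affects connections tunnelled through ssh; a raw DISPLAY=workstation.com:0 connection instead requires the workstation's X server to be listening on TCP port 6000, which Ubuntu 10.04 typically disables (the X server is started with -nolisten tcp), so xhost alone is not enough. Second, the ssh route can be tested on its own:

        # On the workstation: is anything listening on the X11 TCP port?
        netstat -ltn | grep 6000

        # Tunnelled alternative - ssh sets DISPLAY itself, e.g. localhost:10.0
        ssh -Y username@server.com
        echo $DISPLAY
        xclock

    If the X server is listening but refuses the client, the rejection may show up in /var/log/Xorg.0.log or the display manager's log; if it is not listening at all, the connection is simply refused and nothing is logged, which would explain the empty logs.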

    Read the article

  • How do sites avoid SEO issues / legalities with subdomain unique ids?

    - by JM4
    I was looking through a few websites recently and noticed a trend I'm not sure I understand. Sites are creating unique referral URLs for customers in the form of http://customname.site.com (if somebody were to use http://www.site.com/customname it would function the same way). Watching the network traffic in Google Chrome, I can see the sites issue a 302 redirect at some point and then do some sort of htaccess redirect, taking the subdomain name (customname), applying it as a referral parameter and keeping it in the session during the entire process. However, there must be thousands of these custom URLs that people are typing in. How is each one of these "subdomains" not treated as a separate URL which in turn is redirected to the same page (in short, generating tons of links all pointing to the same page, which Google would normally frown upon)? Additionally, the links also appear on the sites themselves as clickable links, so I'm not sure how these are not tracked. Similarly, the "unique" URL is not indexed or cached in any Google search results. How is this capability handled? It does NOT highlight the referral aspect, but a true example of this is visiting http://sfgiants.com, which does a 302 redirect to the much longer proper San Francisco Giants MLB homepage. I am wondering how SFgiants.com is not indexed (assuming that direct shortened link appears on several MLB pages)? 1 - I know these are 302 redirects; I can see this in the site's network flow. 2 - These links do in fact appear on the page itself; in some areas the bottom of the page may say "send this page to a friend! http://name.site.com/", which in turn would again redirect to something like http://www.site.com?id=name so the id value can be stored in the session.
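
    Mechanically, a hedged sketch of how such vanity subdomains are often wired up: a wildcard DNS record (*.site.com) sends every subdomain to one virtual host, and a rewrite turns the host name into a query parameter with a 302, so the short URL never serves content of its own. The host names and parameter below are illustrative.

        # Apache sketch behind a wildcard *.site.com DNS record
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(?!www\.)([^.]+)\.site\.com$ [NC]
        RewriteRule ^$ http://www.site.com/?id=%1 [R=302,L]

    The 302 marks the vanity host as a temporary pointer, and assuming the landing page declares its own canonical URL, the thousands of short hosts generally stay out of the index while still passing the referral value along.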

    Read the article

  • Not able to track traffic on subdomain using Google Analytics

    - by Steven
    I'm trying to track traffic for my sub-domain, but it's not happening. This is how it's set up. My partner has a domain called sub1.partner.com. This domain points to partner1.mydomain.com. The idea is that users think they are browsing my partner's website, when they are in fact browsing pages on my server. My tracking code looks like this: var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']); _gaq.push(['_setDomainName', '.mysite.com']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); In Google Analytics I've created a new account under my main account and called it partner1.mysite.com. On this account I have created a filter: Filter type: include Filter field: Host name Filter pattern: partner1.mysite.no Case sensitive: No What more can I try to track traffic on my subdomain? UPDATE Question 1 Is this line correct? _gaq.push(['_setDomainName', '.mysite.com']); Question 2 Is it correct that I have to add \ before punctuation, like \., in filters?
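
    On the filter question, a hedged sketch of the include filter with the dots escaped (GA filter patterns are regular expressions, so an unescaped dot matches any character). The hostname here assumes the pages are actually served as partner1.mydomain.com rather than the .no name used in the original filter:

        Filter type:    Include
        Filter field:   Hostname
        Filter pattern: partner1\.mydomain\.com
        Case sensitive: No

    The _setDomainName value should likewise name the domain the visitor's browser actually sees (e.g. '.mydomain.com'), since the tracking cookie can only be set on the serving domain.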

    Read the article

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at: http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html MySQL Workbench 5.2 GA • Data Modeling • Query (replaces the old MySQL Query Browser) • Administration (replaces the old MySQL Administrator) Please get your copy from our Download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux. http://dev.mysql.com/downloads/workbench/ Workbench Documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html Utilities Documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance – especially in Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it in http://wb.mysql.com/workbench/doc/ For a detailed list of resolved issues, see the change log. http://dev.mysql.com/doc/workbench/en/wb-change-history.html If you need any additional info or help please get in touch with us. Post in our forums or leave comments on our blog pages. - The MySQL Workbench Team

    Read the article

  • Should I use subdomains or subfolders for my user groups?

    - by bilygates
    Hello, I run a photography website where each user has their own subdomain (i.e. user.site.com). I'm thinking of adding user groups but I'm unable to decide if I should also associate a separate subdomain or simply a subfolder with each group: Subfolders (www.site.com/groups/my-group) Pros: Easier to maintain from a technical p.o.v. Cons: Harder to memorize. The URLs can get really long (www.site.com/groups/my-group/albums/my-album/) Subdomains (my-group.site.com) Pros: Easier to memorize. Shorter URLs. One might have the impression that such a URL is somewhat more "independent" from the main site. Cons: Group and user names belong to the same namespace, so we need to check for collisions when creating a new user/group. One cannot determine the content of the page by only reading the URL: Is x.site.com a user page or a group page? What's your opinion on the matter? I should note that DeviantArt.com uses the 2nd option (that's where I got the idea). Thank you in advance!

    Read the article

  • Axis2 web service behaves differently when tested with a web service client than with a local test class

    - by Stefano
    Hello I need to update a facade to some web service proxy classes to a third party web service, and expose them as a service. This for two reason : to maintain the same interface for all application that need to use the system : actually its migrating and there are a few differences in the third party ws (method names); and to expose a simplified interface. The third party has provided me with a manual and some pregenerated proxy classes to their service (the java file says generated with axis2 1,4) . I've used netbeans 6.8 and the axis2 plugin to create an axis2 service . This service contains the proxy classes and the facade class which instantiate the web service proxy and calls its method; the facade class is exposed as service. I've used axis2 1.4 (at beginnig and later 1.5 ) and tomcat 6.0. The first test i did was to call the facede methods from inside the project itself and it worked. Then i've created a new project with a jax-ws web service client to call my class deployed on axis2. At this point has happened two strange thing : In the axis2 services page has appeared the third party proxy class as if it were a new service (if i try to get the wsdl axis raises an error ). eg. the proxy interface is named WebServiceAPI (_stub is the concrete class) and , after the first call to my service , i find a new "WebServicesAPI1272968932531_1" service inside axis . The call obvoiusly fail i've began to sniff soap messages with wireshark and i've found they differs when using proxy classes direclty from my facade test class by the messages created after being deployed on axis. i've noticed they differs for the presence of the soap header in the failing message. any help would be greatly appreciated : maybe i messed up something, there might be some incompatibilities or version mistakes? below i've added the signature of the third party proxy, its impementation and the different soap messages: /* * WebServicesAPI.java * This file was auto-generated from WSDL * by the Apache Axis2 version: 1.4 Built on : Apr 26, 2008 (06:24:30 EDT) */ package com.ibm.eci.wsapi; public interface WebServicesAPI { public com.ibm.eci.wsapi.ArrayOfstring getWorkItemHistory( java.lang.String stateKey,java.lang.String logonID,com.ibm.eci.wsapi.RepoItemHandle workItemHandle) throws java.rmi.RemoteException,com.ibm.eci.wsapi.ExceptionException0; ...etc the concrete class is : /** * WebServicesAPIStub.java * * This file was auto-generated from WSDL * by the Apache Axis2 version: 1.4 Built on : Apr 26, 2008 (06:24:30 EDT) */ package com.ibm.eci.wsapi; /* * WebServicesAPIStub java implementation */ public class WebServicesAPIStub extends org.apache.axis2.client.Stub implements WebServicesAPI{ protected org.apache.axis2.description.AxisOperation[] _operations; ... 
public com.ibm.eci.wsapi.ArrayOfstring getWorkItemHistory( java.lang.String stateKey297,java.lang.String logonID298,com.ibm.eci.wsapi.RepoItemHandle workItemHandle299) throws java.rmi.RemoteException ,com.ibm.eci.wsapi.ExceptionException0{ org.apache.axis2.context.MessageContext _messageContext = null; try{ org.apache.axis2.client.OperationClient _operationClient = _serviceClient.createClient(_operations[0].getName()); _operationClient.getOptions().setAction("\"\""); _operationClient.getOptions().setExceptionToBeThrownOnSOAPFault(true); addPropertyToOperationClient(_operationClient,org.apache.axis2.description.WSDL2Constants.ATTR_WHTTP_QUERY_PARAMETER_SEPARATOR,"&"); // create a message context _messageContext = new org.apache.axis2.context.MessageContext(); // create SOAP envelope with that payload org.apache.axiom.soap.SOAPEnvelope env = null; com.ibm.eci.wsapi.GetWorkItemHistoryE dummyWrappedType = null; env = toEnvelope(getFactory(_operationClient.getOptions().getSoapVersionURI()), stateKey297, logonID298, workItemHandle299, dummyWrappedType, optimizeContent(new javax.xml.namespace.QName("http://wsapi.eci.ibm.com", "getWorkItemHistory"))); //adding SOAP soap_headers _serviceClient.addHeadersToEnvelope(env); // set the message context with that soap envelope _messageContext.setEnvelope(env); // add the message contxt to the operation client _operationClient.addMessageContext(_messageContext); //execute the operation client _operationClient.execute(true); org.apache.axis2.context.MessageContext _returnMessageContext = _operationClient.getMessageContext( org.apache.axis2.wsdl.WSDLConstants.MESSAGE_LABEL_IN_VALUE); org.apache.axiom.soap.SOAPEnvelope _returnEnv = _returnMessageContext.getEnvelope(); java.lang.Object object = fromOM( _returnEnv.getBody().getFirstElement() , com.ibm.eci.wsapi.GetWorkItemHistoryResponseE.class, getEnvelopeNamespaces(_returnEnv)); return getGetWorkItemHistoryResponse_return((com.ibm.eci.wsapi.GetWorkItemHistoryResponseE)object); ... 
the failing soap message (generated by jax-ws client to the axis deployed service) is : POST /vbr_wsapi/services/WebServicesAPI.Endpoint HTTP/1.1 Content-Type: text/xml; charset=UTF-8 SOAPAction: "" User-Agent: Axis2 Host: n0611049:9083 Transfer-Encoding: chunked <?xml version='1.0' encoding='UTF-8'?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing"> <wsa:To>http://n0611049:9083/vbr_wsapi/services/WebServicesAPI.Endpoint</wsa:To> <wsa:MessageID>urn:uuid:A31AD99897F9045E981272964443982</wsa:MessageID><wsa:Action>""</wsa:Action> </soapenv:Header> <soapenv:Body> <ns1:initializeProps xmlns:ns1="http://wsapi.eci.ibm.com"> <props><val>client.locale=it_IT</val> </props> </ns1:initializeProps> </soapenv:Body> </soapenv:Envelope> HTTP/1.1 500 Internal Server Error Content-Type: text/xml; charset=UTF-8 Content-Language: en-US Transfer-Encoding: chunked Connection: Close Date: Tue, 04 May 2010 09:16:15 GMT Server: WebSphere Application Server/7.0 <?xml version='1.0' encoding='UTF-8'?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing"> <wsa:Action>http://www.w3.org/2005/08/addressing/fault</wsa:Action> <wsa:RelatesTo>urn:uuid:A31AD99897F9045E981272964443982</wsa:RelatesTo> <wsa:FaultDetail> <wsa:ProblemAction> <wsa:Action>""</wsa:Action> </wsa:ProblemAction> </wsa:FaultDetail> </soapenv:Header> <soapenv:Body> <soapenv:Fault xmlns:wsa="http://www.w3.org/2005/08/addressing"> <faultcode>wsa:ActionNotSupported</faultcode> <faultstring>The [action] cannot be processed at the receiver.</faultstring> <detail /> </soapenv:Fault> </soapenv:Body> </soapenv:Envelope> the succesful call (generated by my local test class, not being deployed to axis yet) : POST /vbr_wsapi/services/WebServicesAPI.Endpoint HTTP/1.1 Content-Type: text/xml; charset=UTF-8 SOAPAction: "" User-Agent: Axis2 Host: n0611049:9083 Transfer-Encoding: chunked <?xml version='1.0' encoding='UTF-8'?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> <soapenv:Body> <ns1:initializeProps xmlns:ns1="http://wsapi.eci.ibm.com"> <props> <val>client.locale=it_IT</val> </props> </ns1:initializeProps> </soapenv:Body> </soapenv:Envelope> HTTP/1.1 200 OK Content-Type: text/xml; charset=UTF-8 Content-Language: en-US Transfer-Encoding: chunked Date: Tue, 04 May 2010 09:40:03 GMT Server: WebSphere Application Server/7.0 <?xml version='1.0' encoding='UTF-8'?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> <soapenv:Body> <dlwmin:initializePropsResponse xmlns:dlwmin="http://wsapi.eci.ibm.com"> <return>e0e40cc51ceb0adf96c582bb6e047b3d0f</return> </dlwmin:initializePropsResponse> </soapenv:Body> </soapenv:Envelope> POST /vbr_wsapi/services/WebServicesAPI.Endpoint HTTP/1.1 Content-Type: text/xml; charset=UTF-8 SOAPAction: "" User-Agent: Axis2 Host: n0611049:9083 Transfer-Encoding: chunked <?xml version='1.0' encoding='UTF-8'?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> <soapenv:Body> <ns1:logon xmlns:ns1="http://wsapi.eci.ibm.com"> <stateKey>e0e40cc51ceb0adf96c582bb6e047b3d0f</stateKey> <systemID>----</systemID> <authBundle> <password>-----</password> <sealed>false</sealed> <username>---</username> </authBundle> </ns1:logon> </soapenv:Body> </soapenv:Envelope> HTTP/1.1 200 OK Content-Type: text/xml; charset=UTF-8 Content-Language: en-US Transfer-Encoding: chunked Date: Tue, 04 May 2010 09:40:21 GMT 
Server: WebSphere Application Server/7.0 <?xml version='1.0' encoding='UTF-8'?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> <soapenv:Body> <dlwmin:logonResponse xmlns:dlwmin="http://wsapi.eci.ibm.com"> <return>e0e40cc51ceb0adf96c582bb6e047b3d10</return> </dlwmin:logonResponse> </soapenv:Body> </soapenv:Envelope> ... goes on with other calls
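
    The visible difference between the two exchanges is the WS-Addressing header block that only appears once the facade runs inside Axis2, and it is that header the receiver rejects with "The [action] cannot be processed at the receiver". A hedged Java sketch of one mitigation, assuming the extra wsa:* headers come from the addressing module being engaged in the deployed Axis2 environment, is to switch addressing off for outgoing messages on the generated stub before calling the third-party service; the class and constant names should be checked against the exact Axis2 version in use, and the facade/method names below are illustrative.

        import org.apache.axis2.addressing.AddressingConstants;
        import org.apache.axis2.client.Options;
        import com.ibm.eci.wsapi.WebServicesAPIStub;

        public class WsapiFacade {
            public WebServicesAPIStub createStub(String endpoint) throws Exception {
                WebServicesAPIStub stub = new WebServicesAPIStub(endpoint);
                Options options = stub._getServiceClient().getOptions();
                // Ask Axis2 not to add wsa:* headers to outgoing requests
                options.setProperty(AddressingConstants.DISABLE_ADDRESSING_FOR_OUT_MESSAGES,
                                    Boolean.TRUE);
                return stub; // then call initializeProps / logon as before
            }
        }

    Removing the addressing module from the service's module list (or axis2.xml) is the configuration-side equivalent of the same idea.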

    Read the article

  • TwoWay Binding With ItemsControl

    - by Andrew
    I'm trying to write a user control that has an ItemsControl, the ItemsTemplate of which contains a TextBox that will allow for TwoWay binding. However, I must be making a mistake somewhere in my code, because the binding only appears to work as if Mode=OneWay. This is a pretty simplified excerpt from my project, but it still contains the problem: <UserControl x:Class="ItemsControlTest.UserControl1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Height="300" Width="300"> <Grid> <StackPanel> <ItemsControl ItemsSource="{Binding Path=.}" x:Name="myItemsControl"> <ItemsControl.ItemTemplate> <DataTemplate> <TextBox Text="{Binding Mode=TwoWay, UpdateSourceTrigger=LostFocus, Path=.}" /> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> <Button Click="Button_Click" Content="Click Here To Change Focus From ItemsControl" /> </StackPanel> </Grid> </UserControl> Here's the code behind for the above control: using System; using System.Windows; using System.Windows.Controls; using System.Collections.ObjectModel; namespace ItemsControlTest { /// <summary> /// Interaction logic for UserControl1.xaml /// </summary> public partial class UserControl1 : UserControl { public ObservableCollection<string> MyCollection { get { return (ObservableCollection<string>)GetValue(MyCollectionProperty); } set { SetValue(MyCollectionProperty, value); } } // Using a DependencyProperty as the backing store for MyCollection. This enables animation, styling, binding, etc... public static readonly DependencyProperty MyCollectionProperty = DependencyProperty.Register("MyCollection", typeof(ObservableCollection<string>), typeof(UserControl1), new UIPropertyMetadata(new ObservableCollection<string>())); public UserControl1() { for (int i = 0; i < 6; i++) MyCollection.Add("String " + i.ToString()); InitializeComponent(); myItemsControl.DataContext = this.MyCollection; } private void Button_Click(object sender, RoutedEventArgs e) { // Insert a string after the third element of MyCollection MyCollection.Insert(3, "Inserted Item"); // Display contents of MyCollection in a MessageBox string str = ""; foreach (string s in MyCollection) str += s + Environment.NewLine; MessageBox.Show(str); } } } And finally, here's the xaml for the main window: <Window x:Class="ItemsControlTest.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:src="clr-namespace:ItemsControlTest" Title="Window1" Height="300" Width="300"> <Grid> <src:UserControl1 /> </Grid> </Window> Well, that's everything. I'm not sure why editing the TextBox.Text properties in the window does not seem to update the source property for the binding in the code behind, namely MyCollection. Clicking on the button pretty much causes the problem to stare me in the face;) Please help me understand where I'm going wrong. Thanx! Andrew
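
    A likely culprit, offered as a hedged sketch rather than a confirmed diagnosis: the items are plain strings, and a TwoWay binding whose Path is the string itself has no property setter to write back into, so edits in the TextBox never reach the ObservableCollection. Wrapping each value in a small class with a settable property that raises INotifyPropertyChanged, and binding to that property, usually completes the round trip; the StringItem/Value names below are illustrative.

        using System.ComponentModel;

        // Hypothetical wrapper so the TextBox has a property to write back into.
        public class StringItem : INotifyPropertyChanged
        {
            private string _value;
            public string Value
            {
                get { return _value; }
                set
                {
                    if (_value != value)
                    {
                        _value = value;
                        var handler = PropertyChanged;
                        if (handler != null)
                            handler(this, new PropertyChangedEventArgs("Value"));
                    }
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
        }

        // The collection becomes ObservableCollection<StringItem>, and the DataTemplate binds:
        //   <TextBox Text="{Binding Path=Value, Mode=TwoWay, UpdateSourceTrigger=LostFocus}" />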

    Read the article

  • FreeBSD performance tuning: sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # Debug & DTrace options KDB # Kernel debugger related code options KDB_TRACE # Print a stack trace for a panic options KDTRACE_FRAME # amd64-only(?) options KDTRACE_HOOKS # all architectures - enable general DTrace hooks #options DDB #options DDB_CTF # all architectures - kernel ELF linker loads CTF data # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (8.x+) #options TEKEN_UTF8 # FreeBSD 8.1+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html # (FYI: "resolution" is panic so use with caution) #options DEADLKRES # Increase maximum size of Raw I/O and sendfile(2) readahead #options MAXPHYS=(1024*1024) #options MAXBSIZE=(1024*1024) # For scheduler debug enable following option. # Debug will be available via `kern.sched.stats` sysctl # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup #options SCHED_STATS If you are tuning network for maximum performance you may wish to play with ifconfig options like: # You can list all capabilities via `ifconfig -m` ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu In case you've enabled DDB in kernel config, you should edit your /etc/ddb.conf and add something like this to enable automatic reboot (and textdump as bonus): script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset And do not forget to add ddb_enable="YES" to /etc/rc.conf Since FreeBSD 9 you can select to enable/disable flowcontrol on your NIC: # See http://en.wikipedia.org/wiki/Ethernet_flow_control and # http://www.mail-archive.com/[email protected]/msg07927.html for additional info ifconfig bge0 media auto mediaopt flowcontrol PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. FreeBSD WIP * Whats cooking for FreeBSD 7? * Whats cooking for FreeBSD 8? * Whats cooking for FreeBSD 9? So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!

    Read the article

  • Elfsign Object Signing on Solaris

    - by danx
    Elfsign Object Signing on Solaris Don't let this happen to you—use elfsign! Solaris elfsign(1) is a command that signs and verifies ELF format executables. That includes not just executable programs (such as ls or cp), but other ELF format files including libraries (such as libnvpair.so) and kernel modules (such as autofs). Elfsign has been available since Solaris 10 and ELF format files distributed with Solaris, since Solaris 10, are signed by either Sun Microsystems or its successor, Oracle Corporation. When an ELF file is signed, elfsign adds a new section the ELF file, .SUNW_signature, that contains a RSA public key signature and other information about the signer. That is, the algorithm used, algorithm OID, signer CN/OU, and time stamp. The signature section can later be verified by elfsign or other software by matching the signature in the file agains the ELF file contents (excluding the signature). ELF executable files may also be signed by a 3rd-party or by the customer. This is useful for verifying the origin and authenticity of executable files installed on a system. The 3rd-party or customer public key certificate should be installed in /etc/certs/ to allow verification by elfsign. For currently-released versions of Solaris, only cryptographic framework plugin libraries are verified by Solaris. However, all ELF files may be verified by the elfsign command at any time. Elfsign Algorithms Elfsign signatures are created by taking a digest of the ELF section contents, then signing the digest with RSA. To verify, one takes a digest of ELF file and compares with the expected digest that's computed from the signature and RSA public key. Originally elfsign took a MD5 digest of a SHA-1 digest of the ELF file sections, then signed the resulting digest with RSA. In Solaris 11.1 then Solaris 11.1 SRU 7 (5/2013), the elfsign crypto algorithms available have been expanded to keep up with evolving cryptography. The following table shows the available elfsign algorithms: Elfsign Algorithm Solaris Release Comments elfsign sign -F rsa_md5_sha1   S10, S11.0, S11.1 Default for S10. Not recommended* elfsign sign -F rsa_sha1 S11.1 Default for S11.1. Not recommended elfsign sign -F rsa_sha256 S11.1 patch SRU7+   Recommended ___ *Most or all CAs do not accept MD5 CSRs and do not issue MD5 certs due to MD5 hash collision problems. RSA Key Length. I recommend using RSA-2048 key length with elfsign is RSA-2048 as the best balance between a long expected "life time", interoperability, and performance. RSA-2048 keys have an expected lifetime through 2030 (and probably beyond). For details, see Recommendation for Key Management: Part 1: General, NIST Publication SP 800-57 part 1 (rev. 3, 7/2012, PDF), tables 2 and 4 (pp. 64, 67). Step 1: create or obtain a key and cert The first step in using elfsign is to obtain a key and cert from a public Certificate Authority (CA), or create your own self-signed key and cert. I'll briefly explain both methods. Obtaining a Certificate from a CA To obtain a cert from a CA, such as Verisign, Thawte, or Go Daddy (to name a few random examples), you create a private key and a Certificate Signing Request (CSR) file and send it to the CA, following the instructions of the CA on their website. They send back a signed public key certificate. The public key cert, along with the private key you created is used by elfsign to sign an ELF file. The public key cert is distributed with the software and is used by elfsign to verify elfsign signatures in ELF files. 
You need to request a RSA "Class 3 public key certificate", which is used for servers and software signing. Elfsign uses RSA and we recommend RSA-2048 keys. The private key and CSR can be generated with openssl(1) or pktool(1) on Solaris. Here's a simple example that uses pktool to generate a private RSA_2048 key and a CSR for sending to a CA: $ pktool gencsr keystore=file format=pem outcsr=MYCSR.p10 \ subject="CN=canineswworks.com,OU=Canine SW object signing" \ outkey=MYPRIVATEKEY.key $ openssl rsa -noout -text -in MYPRIVATEKEY.key Private-Key: (2048 bit) modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7 publicExponent: 65537 (0x10001) privateExponent: 26:14:fc:49:26:bc:a3:14:ee:31:5e:6b:ac:69:83: . . . [omitted for brevity] . . . 81 prime1: 00:f6:b7:52:73:bc:26:57:26:c8:11:eb:6c:dc:cb: . . . [omitted for brevity] . . . bc:91:d0:40:d6:9d:ac:b5:69 prime2: 00:da:df:3f:56:b2:18:46:e1:89:5b:6c:f1:1a:41: . . . [omitted for brevity] . . . f3:b7:48:de:c3:d9:ce:af:af exponent1: 00:b9:a2:00:11:02:ed:9a:3f:9c:e4:16:ce:c7:67: . . . [omitted for brevity] . . . 55:50:25:70:d3:ca:b9:ab:99 exponent2: 00:c8:fc:f5:57:11:98:85:8e:9a:ea:1f:f2:8f:df: . . . [omitted for brevity] . . . 23:57:0e:4d:b2:a0:12:d2:f5 coefficient: 2f:60:21:cd:dc:52:76:67:1a:d8:75:3e:7f:b0:64: . . . [omitted for brevity] . . . 06:94:56:d8:9d:5c:8e:9b $ openssl req -noout -text -in MYCSR.p10 Certificate Request: Data: Version: 2 (0x2) Subject: OU=Canine SW object signing, CN=canineswworks.com Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7 Exponent: 65537 (0x10001) Attributes: Signature Algorithm: sha1WithRSAEncryption b3:e8:30:5b:88:37:68:1c:26:6b:45:af:5e:de:ea:60:87:ea: . . . [omitted for brevity] . . . 06:f9:ed:b4 Secure storage of RSA private key. The private key needs to be protected if the key signing is used for production (as opposed to just testing). That is, protect the key to protect against unauthorized signatures by others. One method is to use a PIN-protected PKCS#11 keystore. The private key you generate should be stored in a secure manner, such as in a PKCS#11 keystore using pktool(1). Otherwise others can sign your signature. Other secure key storage mechanisms include a SCA-6000 crypto card, a USB thumb drive stored in a locked area, a dedicated server with restricted access, Oracle Key Manager (OKM), or some combination of these. I also recommend secure backup of the private key. Here's an example of generating a private key protected in the PKCS#11 keystore, and a CSR. $ pktool setpin # use if PIN not set yet Enter token passphrase: changeme Create new passphrase: Re-enter new passphrase: Passphrase changed. $ pktool gencsr keystore=pkcs11 label=MYPRIVATEKEY \ format=pem outcsr=MYCSR.p10 \ subject="CN=canineswworks.com,OU=Canine SW object signing" $ pktool list keystore=pkcs11 Enter PIN for Sun Software PKCS#11 softtoken: Found 1 asymmetric public keys. Key #1 - RSA public key: MYPRIVATEKEY Here's another example that uses openssl instead of pktool to generate a private key and CSR: $ openssl genrsa -out cert.key 2048 $ openssl req -new -key cert.key -out MYCSR.p10 Self-Signed Cert You can use openssl or pktool to create a private key and a self-signed public key certificate. A self-signed cert is useful for development, testing, and internal use. The private key created should be stored in a secure manner, as mentioned above. 
The following example creates a private key, MYSELFSIGNED.key, and a public key cert, MYSELFSIGNED.pem, using pktool and displays the contents with the openssl command. $ pktool gencert keystore=file format=pem serial=0xD06F00D lifetime=20-year \ keytype=rsa hash=sha256 outcert=MYSELFSIGNED.pem outkey=MYSELFSIGNED.key \ subject="O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com" $ pktool list keystore=file objtype=cert infile=MYSELFSIGNED.pem Found 1 certificates. 1. (X.509 certificate) Filename: MYSELFSIGNED.pem ID: c8:24:59:08:2b:ae:6e:5c:bc:26:bd:ef:0a:9c:54:de:dd:0f:60:46 Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Not Before: Oct 17 23:18:00 2013 GMT Not After: Oct 12 23:18:00 2033 GMT Serial: 0xD06F00D0 Signature Algorithm: sha256WithRSAEncryption $ openssl x509 -noout -text -in MYSELFSIGNED.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3496935632 (0xd06f00d0) Signature Algorithm: sha256WithRSAEncryption Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Validity Not Before: Oct 17 23:18:00 2013 GMT Not After : Oct 12 23:18:00 2033 GMT Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 9e:39:fe:c8:44:5c:87:2c:8f:f4:24:f6:0c:9a:2f:64:84:d1: . . . [omitted for brevity] . . . 5f:78:8e:e8 $ openssl rsa -noout -text -in MYSELFSIGNED.key Private-Key: (2048 bit) modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77 publicExponent: 65537 (0x10001) privateExponent: 0a:06:0f:23:e7:1b:88:62:2c:85:d3:2d:c1:e6:6e: . . . [omitted for brevity] . . . 9c:e1:e0:0a:52:77:29:4a:75:aa:02:d8:af:53:24: c1 prime1: 00:ea:12:02:bb:5a:0f:5a:d8:a9:95:b2:ba:30:15: . . . [omitted for brevity] . . . 5b:ca:9c:7c:19:48:77:1e:5d prime2: 00:cd:82:da:84:71:1d:18:52:cb:c6:4d:74:14:be: . . . [omitted for brevity] . . . 5f:db:d5:5e:47:89:a7:ef:e3 exponent1: 32:37:62:f6:a6:bf:9c:91:d6:f0:12:c3:f7:04:e9: . . . [omitted for brevity] . . . 97:3e:33:31:89:66:64:d1 exponent2: 00:88:a2:e8:90:47:f8:75:34:8f:41:50:3b:ce:93: . . . [omitted for brevity] . . . ff:74:d4:be:f3:47:45:bd:cb coefficient: 4d:7c:09:4c:34:73:c4:26:f0:58:f5:e1:45:3c:af: . . . [omitted for brevity] . . . af:01:5f:af:ad:6a:09:bf Step 2: Sign the ELF File object By now you should have your private key, and obtained, by hook or crook, a cert (either from a CA or use one you created (a self-signed cert). The next step is to sign one or more objects with your private key and cert. Here's a simple example that creates an object file, signs, verifies, and lists the contents of the ELF signature. $ echo '#include <stdio.h>\nint main(){printf("Hello\\n");}'>hello.c $ make hello cc -o hello hello.c $ elfsign verify -v -c MYSELFSIGNED.pem -e hello elfsign: no signature found in hello. $ elfsign sign -F rsa_sha256 -v -k MYSELFSIGNED.key -c MYSELFSIGNED.pem -e hello elfsign: hello signed successfully. format: rsa_sha256. signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com. signed on: October 17, 2013 04:22:49 PM PDT. 
    $ elfsign list -f format -e hello
    rsa_sha256
    $ elfsign list -f signer -e hello
    O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
    $ elfsign list -f time -e hello
    October 17, 2013 04:22:49 PM PDT
    $ elfsign verify -v -c MYSELFSIGNED.key -e hello
    elfsign: verification of hello failed.
    format: rsa_sha256.
    signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
    signed on: October 17, 2013 04:22:49 PM PDT.
    Signing using the pkcs11 keystore To sign the ELF file using a private key in the secure pkcs11 keystore, replace "-k MYSELFSIGNED.key" in the "elfsign sign" command line with "-T MYPRIVATEKEY", where MYPRIVATEKEY is the pkcs11 token label.
    Step 3: Install the cert and test on another system Just signing the object isn't enough. You need to copy or install the cert and the signed ELF file(s) on another system to test that the signature is OK. Your public key cert should be installed in /etc/certs. Use elfsign verify to verify the signature. Elfsign verify checks each cert in /etc/certs until it finds one that matches the elfsign signature in the file. If one isn't found, the verification fails. Here's an example:
    $ su
    Password:
    # rm /etc/certs/MYSELFSIGNED.key
    # cp MYSELFSIGNED.pem /etc/certs
    # exit
    $ elfsign verify -v hello
    elfsign: verification of hello passed.
    format: rsa_sha256.
    signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
    signed on: October 17, 2013 04:24:20 PM PDT.
    After testing, package your cert along with your ELF object to allow elfsign verification after your cert and object are installed or copied.
    Under the Hood: elfsign verification Here are the steps taken to verify an ELF file signed with elfsign. The steps to sign the file are similar, except that the private key exponent is used instead of the public key exponent and the .SUNW_signature section is written to the ELF file instead of being read from it.
    1. Generate a digest (SHA-256) of the ELF file sections. This digest uses all ELF sections loaded in memory, but excludes the ELF header, the .SUNW_signature section, and the symbol table.
    2. Extract the RSA signature (RSA-2048) from the .SUNW_signature section.
    3. Extract the RSA public key modulus and public key exponent (65537) from the public key cert.
    4. Calculate the expected digest as follows: signature ^ publicKeyExponent mod publicKeyModulus.
    5. Strip the PKCS#1 padding (most significant bytes) from the above. The padding is 0x00, 0x01, 0xff, 0xff, ..., 0xff, 0x00.
    6. If the actual digest == expected digest, the ELF file is verified (OK).
    Further Information elfsign(1), pktool(1), and openssl(1) man pages. "Signed Solaris 10 Binaries?" blog by Darren Moffat (2005) shows how to use elfsign. "Simple CLI based CA on Solaris" blog by Darren Moffat (2008) shows how to set up a simple CA for use with self-signed certificates. "How to Create a Certificate by Using the pktool gencert Command", System Administration Guide: Security Services (available at docs.oracle.com).
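    The verification arithmetic in the "Under the Hood" steps above can also be reproduced with stock openssl(1) commands. This is only a rough sketch: sig.bin and pubkey.pem are placeholder file names, and you would first have to dump the raw RSA signature bytes out of the .SUNW_signature section yourself (elfsign does all of this internally):
    $ openssl x509 -in MYSELFSIGNED.pem -noout -pubkey > pubkey.pem                # extract the signer's RSA public key from the cert
    $ openssl rsautl -verify -pubin -inkey pubkey.pem -in sig.bin > expected.bin   # raise the signature to the public exponent mod n and strip the PKCS#1 padding
    $ # expected.bin should now match the SHA-256 digest computed over the loaded ELF sections
    $ # (ELF header, .SUNW_signature section and symbol table excluded); equal digests mean the file verifies.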

    Read the article

  • C++ simple logging class with UTF-8 output [code example]

    - by Andrew
    Hello everyone, I was working on one of my academic projects and for the first time I needed pure C++ without GUI. After googling for a while, I did not find any simple and easy to use implementation for logging and created my own. This is a simple implementation with iostreams that logs messages to screen and to the file simultaneously. I was thinking of using templates but then I realized that I do not expect any changes and removed that. It is modified std::wostream with two added modifiers: 1. TimeStamp - prints time-stamp 2. LogMode(LogModes) - switches output: file only, screen only, file+screen. *Boost::utf8_codecvt_facet* is used for UTF-8 output. // ############################################################################ // # Name: MyLog.h # // # Purpose: Logging Class Header # // # Author: Andrew Drach # // # Modified by: <somebody> # // # Created: 03/21/10 # // # SVN-ID: $Id$ # // # Copyright: (c) 2010 Andrew Drach # // # Licence: <license> # // ############################################################################ #ifndef INCLUDED_MYLOG_H #define INCLUDED_MYLOG_H // headers -------------------------------------------------------------------- #include <string> #include <iostream> #include <fstream> #include <exception> #include <boost/program_options/detail/utf8_codecvt_facet.hpp> using namespace std; // definitions ---------------------------------------------------------------- // ---------------------------------------------------------------------------- // DblBuf class // Splits up output stream into two // Inspired by http://wordaligned.org/articles/cpp-streambufs // ---------------------------------------------------------------------------- class DblBuf : public wstreambuf { private: // private member declarations DblBuf(); wstreambuf *bf1; wstreambuf *bf2; virtual int_type overflow(int_type ch) { int_type eof = traits_type::eof(); int_type not_eof = !eof; if ( traits_type::eq_int_type(ch,eof) ) return not_eof; else { char_type ch1 = traits_type::to_char_type(ch); int_type r1( bf1on ? bf1->sputc(ch1) : not_eof ); int_type r2( bf2on ? bf2->sputc(ch1) : not_eof ); return (traits_type::eq_int_type(r1,eof) || traits_type::eq_int_type(r2,eof) ) ? eof : ch; } } virtual int sync() { int r1( bf1on ? bf1->pubsync() : NULL ); int r2( bf2on ? bf2->pubsync() : NULL ); return (r1 == 0 && r2 == 0) ? 
0 : -1; } public: // public member declarations explicit DblBuf(wstreambuf *bf1, wstreambuf *bf2) : bf1(bf1), bf2(bf2) { if (bf1) bf1on = true; else bf1on = false; if (bf2) bf2on = true; else bf2on = false; } bool bf1on; bool bf2on; }; // ---------------------------------------------------------------------------- // logstream class // Wrapper for a standard wostream with access to modified buffer // ---------------------------------------------------------------------------- class logstream : public wostream { private: // private member declarations logstream(); public: // public member declarations DblBuf *buf; explicit logstream(wstreambuf *StrBuf, bool isStd = false) : wostream(StrBuf, isStd), buf((DblBuf*)StrBuf) {} }; // ---------------------------------------------------------------------------- // Logging mode Class // ---------------------------------------------------------------------------- enum LogModes{LogToFile=1, LogToScreen, LogToBoth}; class LogMode { private: // private member declarations LogMode(); short mode; public: // public member declarations LogMode(short mode1) : mode(mode1) {} logstream& operator()(logstream &stream1) { switch(mode) { case LogToFile: stream1.buf->bf1on = true; stream1.buf->bf2on = false; break; case LogToScreen: stream1.buf->bf1on = false; stream1.buf->bf2on = true; break; case LogToBoth: stream1.buf->bf1on = true; stream1.buf->bf2on = true; } return stream1; } }; logstream& operator<<(logstream &out, LogMode mode) { return mode(out); } wostream& TimeStamp1(wostream &out1) { time_t time1; struct tm timeinfo; wchar_t timestr[512]; // Get current time and convert it to a string time(&time1); localtime_s (&timeinfo, &time1); wcsftime(timestr, 512,L"[%Y-%b-%d %H:%M:%S %p] ",&timeinfo); return out1 << timestr; } // ---------------------------------------------------------------------------- // MyLog class // Logs events to both file and screen // ---------------------------------------------------------------------------- class MyLog { private: // private member declarations MyLog(); auto_ptr<DblBuf> buf; string mErrorMsg1; string mErrorMsg2; string mErrorMsg3; string mErrorMsg4; public: // public member declarations explicit MyLog(string FileName1, wostream *ScrLog1, locale utf8locale1); ~MyLog(); void NewEvent(wstring str1, bool TimeStamp = true); string FileName; wostream *ScrLog; wofstream File; auto_ptr<logstream> Log; locale utf8locale; }; // ---------------------------------------------------------------------------- // MyLog constructor // ---------------------------------------------------------------------------- MyLog::MyLog(string FileName1, wostream *ScrLog1, locale utf8locale1) : // ctors mErrorMsg1("Failed to open file for application logging! []"), mErrorMsg2("Failed to write BOM! []"), mErrorMsg3("Failed to write to file! []"), mErrorMsg4("Failed to close file! 
[]"), FileName(FileName1), ScrLog(ScrLog1), utf8locale(utf8locale1), File(FileName1.c_str()) { // Adjust error strings mErrorMsg1.insert(mErrorMsg1.length()-1,FileName1); mErrorMsg2.insert(mErrorMsg2.length()-1,FileName1); mErrorMsg3.insert(mErrorMsg3.length()-1,FileName1); mErrorMsg4.insert(mErrorMsg4.length()-1,FileName1); // check for file open errors if ( !File ) throw ofstream::failure(mErrorMsg1); // write UTF-8 BOM File << wchar_t(0xEF) << wchar_t(0xBB) << wchar_t(0xBF); // switch locale to UTF-8 File.imbue(utf8locale); // check for write errors if ( File.bad() ) throw ofstream::failure(mErrorMsg2); buf.reset( new DblBuf(File.rdbuf(),ScrLog->rdbuf()) ); Log.reset( new logstream(&*buf) ); } // ---------------------------------------------------------------------------- // MyLog destructor // ---------------------------------------------------------------------------- MyLog::~MyLog() { *Log << TimeStamp1 << "Log finished." << endl; // clean up objects Log.reset(); buf.reset(); File.close(); // check for file close errors if ( File.bad() ) throw ofstream::failure(mErrorMsg4); } //--------------------------------------------------------------------------- #endif // INCLUDED_MYLOG_H Tested on MSVC 2008, boost 1.42. I do not know if this is the right place to share it. Hope it helps anybody. Feel free to make it better.

    Read the article

  • How do I export calendar events in Mozilla lightning

    - by Andrew Grimm
    How do I export calendar events in Mozilla Lightning? I'm using Thunderbird 3.0.4. (Sorry for such a basic question, but clicking on "Help contents" takes me to http://support.mozillamessaging.com/en-US/kb/ , and searching the knowledge base for lightning export got zero hits, and searching for export only got one irrelevant hit)

    Read the article

  • Permissions problems with Apache / SVN

    - by Fred Wuerges
    I installed an SVN server (v1.6) on a VPS running CentOS 5 and Apache 2.2 with the WHM panel. I installed and configured all the necessary modules and am able to create and access repositories via my web browser normally. The problem: I cannot commit or import anything; it always returns permission errors. First error:
    Can not open file '/var/www/svn/test/db/txn-current-lock': Permission denied
    After fixing the previous error:
    Can't open '/var/www/svn/test/db/tempfile.tmp': Permission denied
    And others (many more follow):
    Can't open file '/var/www/svn/test/db/txn-protorevs/0-1m.rev': Permission denied
    I've read numerous tutorials about these errors and applied the permissions they suggest, all without success. I've set the owner to apache or nobody and tried different permissions for folders and files. I'm using TortoiseSVN to connect to the server. Some information that may be useful: I'm trying to commit through an external HTTP connection, like: svn commit http://example.com/svn/test SELinux is disabled; sestatus returns SELinux status: disabled. Running the command to see the active Apache processes, some processes are left with user/group "nobody". I tried changing the Apache settings so it does not run with that user/group, but all my websites stopped working, returning this error: Forbidden You don't have permission to access / on this server. Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
    Apache process list:
    root@vps [/var/www]# ps aux | egrep '(apache|httpd)'
    root 19904 0.0 4.4 133972 35056 ? Ss 16:58 0:00 /usr/local/apache/bin/httpd -k start -DSSL
    nobody 20401 0.0 3.5 133972 27772 ? S 17:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL
    root 20409 0.0 3.4 133972 27112 ? S 17:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL
    nobody 20410 0.0 3.8 190040 30412 ? Sl 17:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL
    nobody 20412 0.0 3.9 190344 30944 ? Sl 17:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL
    nobody 20414 0.0 4.4 190160 35364 ? Sl 17:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL
    nobody 20416 0.0 4.0 190980 32108 ? Sl 17:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL
    nobody 20418 0.3 5.3 263028 42328 ? Sl 17:01 0:12 /usr/local/apache/bin/httpd -k start -DSSL
    root 32409 0.0 0.1 7212 816 pts/0 R+ 17:54 0:00 egrep (apache|httpd)
    SVN folder permission (var/www/):
    drwxrwxr-x 3 apache apache 4096 Dec 11 16:41 svn/
    Repository permission (var/www/svn/):
    drwxrwxr-x 6 apache apache 4096 Dec 11 16:41 test/
    Internal folders of the repository (var/www/svn/test):
    drwxrwxr-x 2 apache apache 4096 Dec 11 16:41 conf/
    drwxrwxr-x 6 apache apache 4096 Dec 11 16:41 db/
    -rwxrwxr-x 1 apache apache 2 Dec 11 16:41 format*
    drwxrwxr-x 2 apache apache 4096 Dec 11 16:41 hooks/
    drwxrwxr-x 2 apache apache 4096 Dec 11 16:41 locks/
    -rwxrwxr-x 1 apache apache 229 Dec 11 16:41 README.txt*
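    Given the process list above, one plausible first check (a hedged sketch based only on the output shown): the httpd workers that actually serve requests run as nobody, while the repository is owned by apache, so mod_dav_svn cannot write to db/. Adjust the user and group to whatever User/Group directives your httpd.conf really contains:
    $ ps aux | egrep '(apache|httpd)' | awk '{print $1}' | sort -u                # confirm which user the request-serving workers run as
    $ chown -R nobody:nobody /var/www/svn/test                                    # hand the repository to that user (assumption: it is nobody)
    $ chmod -R u+rwX /var/www/svn/test                                            # db/, locks/ and the txn files must be writable by it
    $ svn import /tmp/somedir http://example.com/svn/test -m "permission test"    # retry the operation that failed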

    Read the article

  • Trying to install wordpress inside rails app with nginx and fastcgi

    - by pinouchon
    I have a rails app (let's call it myapp) running at www.myapp.com. I want to add a wordpress blog at www.myapp.com/blog. The webserver for the rails app is thin (see the upstream block). The wordpress runs with php-fastcgi. The rails app works fine. My problem is the following: in /home/myapp/myapp/log/error.log I get:
    2013/06/24 10:19:40 [error] 26066#0: *4 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.138.20, server: www.myapp.com, request: "GET /blog/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.myapp.com"
    Here is the nginx conf file:
    upstream myapp {
      server unix:/tmp/thin_myapp.0.sock;
      server unix:/tmp/thin_myapp.1.sock;
      server unix:/tmp/thin_myapp2.sock;
    }
    server {
      listen 80;
      server_name www.myapp.com;
      client_max_body_size 20M;
      access_log /home/myapp/myapp/log/access.log;
      error_log /home/myapp/myapp/log/error.log error;
      root /home/myapp/myapp/public;
      index index.html;
      location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # Index HTML Files
        if (-f $document_root/cache/$uri/index.html) {
          rewrite (.*) /cache/$1/index.html break;
        }
        if (!-f $request_filename) {
          proxy_pass http://myapp;
          break;
        }
        # try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
      }
      location /blog/ {
        root /var/www/wordpress;
        fastcgi_index index.php;
        if (!-e $request_filename) {
          rewrite ^(.*)$ /blog/index.php?q=$1 last;
        }
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/wordpress$fastcgi_script_name;
        fastcgi_pass localhost:9000; # port to FastCGI
      }
    }
    Any ideas why that doesn't work? How do I make sure that php-fastcgi is configured properly?
    Edit: I tried to check whether FastCGI is running with telnet:
    $> telnet 127.0.0.1 9000
    Trying 127.0.0.1...
    telnet: Unable to connect to remote host: Connection refused
    And it's not.
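    The telnet output suggests nothing is listening on 127.0.0.1:9000, so nginx has no FastCGI backend to hand /blog/ to. A hedged sketch for checking and starting one; binary and service names vary by distribution, and php-cgi/php5-fpm are assumptions for a Debian/Ubuntu-era setup:
    $ netstat -tlnp | grep :9000           # is anything bound to port 9000 at all?
    $ php-cgi -b 127.0.0.1:9000 &          # quick manual test listener using php-cgi
    $ curl -I http://www.myapp.com/blog/   # the "111: Connection refused" errors should stop once 9000 answers
    $ # for a permanent setup, run php-fpm with listen = 127.0.0.1:9000 in its pool config, e.g.:
    $ sudo service php5-fpm restart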

    Read the article

  • Why isn't this rewrite rule (nginx) applied? (trying to setup Wordpress multisite)

    - by Brian Park
    Hi, I'm trying to setup Wordpress multisite (subfolder structure) with nginx, but having a problem with this rewrite rule. Below is the Apache's .htaccess, which I have to translate into nginx configuration. RewriteEngine On RewriteBase /blogs/ RewriteRule ^index\.php$ - [L] # uploaded files RewriteRule ^([_0-9a-zA-Z-]+/)?files/(.+) wp-includes/ms-files.php?file=$2 [L] # add a trailing slash to /wp-admin RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L] RewriteCond %{REQUEST_FILENAME} -f [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^ - [L] RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L] RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L] RewriteRule . index.php [L] Below is what I came up with: server { listen 80; server_name example.com; server_name_in_redirect off; expires 1d; access_log /srv/www/example.com/logs/access.log; error_log /srv/www/example.com/logs/error.log; root /srv/www/example.com/public; index index.html; try_files $uri $uri/ /index.html; # rewriting uploaded files rewrite ^/blogs/(.+/)?files/(.+) /blogs/wp-includes/ms-files.php?file=$2 last; # add a trailing slash to /wp-admin rewrite ^/blogs/(.+/)?wp-admin$ /blogs/$1wp-admin/ permanent; if (!-e $request_filename) { rewrite ^/blogs/(.+/)?(wp-(content|admin|includes).*) /blogs/$2 last; rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last; } location /blogs/ { index index.php; #try_files $uri $uri/ /blogs/index.php?q=$uri&$args; } location ~ \.php$ { include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /srv/www/example.com/public$fastcgi_script_name; } # static assets location ~* ^.+\.(manifest)$ { access_log /srv/www/example.com/logs/static.log; } location ~* ^.+\.(ico|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { # only set expires max IFF the file is a static file and exists if (-f $request_filename) { expires max; access_log /srv/www/example.com/logs/static.log; } } } In the above code, I believe rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last; has no effect because when I look at the access_log file, I see the following line: 2010/09/15 01:14:55 [error] 10166#0: *8 "/srv/www/example.com/public/blogs/test/index.php" is not found (2: No such file or directory), request: "GET /blogs/test/ HTTP/1.1" (Here, 'test' is the second blog created using multisite feature) What I'm expecting is that /blogs/test/index.php gets rewritten to /blogs/index.php, but it doesn't seem to do that... Am I overlooking something obvious? Thanks!
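    For what it's worth, the commented-out try_files line is usually a cleaner route than the server-level if/rewrite blocks: let nginx fall back to /blogs/index.php for anything that is not a real file and let WordPress resolve the sub-blog path itself. A hedged, untested sketch of that arrangement, reusing the paths from the config above:
    location /blogs/ {
        index index.php;
        try_files $uri $uri/ /blogs/index.php?q=$uri&$args;   # /blogs/test/ has no files on disk, so hand it to WordPress
    }
    location ~ \.php$ {
        try_files $uri /blogs/index.php;                      # /blogs/test/index.php does not exist either; fall back before FastCGI
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/example.com/public$fastcgi_script_name;
    }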

    Read the article

  • Apache refusing to change DocumentRoot

    - by mingos
    I've installed Zend Server CE 5.1.0 on Windows 7 Ultimate 64 bit in its default location, meaning the path to my htdocs is C:\Program Files (x86)\Zend\Apache2\htdocs. Not something that I would like to type each time I check out a project from SVN in Eclipse or something. I'd like to set the DocumentRoot to a different folder, namely D:\www. What I've done I edited conf/httpd.conf, with the significant lines being: DocumentRoot "D:\www" <Directory "D:\www"> Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> Include conf/extra/httpd-vhosts.conf I edited conf/extra/httpd-vhosts.conf to add a virtual host: NameVirtualHost *:80 <VirtualHost *:80> DocumentRoot D:\www ServerName localhost ServerAlias localhost SetEnv APPLICATION_ENV development SetEnv APPLICATION_DOMAIN localhost </VirtualHost> <VirtualHost *:80> DocumentRoot D:\www\UmbraCMS ServerName umbracms.local ServerAlias umbracms.local SetEnv APPLICATION_ENV development SetEnv APPLICATION_DOMAIN umbracms.local </VirtualHost> I edited C:\Windows\System32\drivers\etc\hosts to add this line: 127.0.0.1 umbracms.local And I also added a PHP project to D:\www\UmbraCMS. And restarted Apache. Actually, I restarted the computer, too, just in case. What's supposed to happen After typing http://umbracms.local/ in the browser's address bar, I want to see my PHP project launch, obviously. What's actually happening No matter whether whether I type http://umbracms.local/ or http://localhost/, I'm taken to the test zend page, located in C:\Program Files (x86)\Zend\Apache2\htdocs\index.html, as if neither DocumentRoot was changed nor name-based virtual hosting worked. Interestingly, when I put another project in C:\Program Files (x86)\Zend\Apache2\htdocs\bugraid\ and then, in the browser, typed http://localhost/bugraid, the project actually opened, or at least tried to, as it completely ignored the project's .htaccess file. Extra considerations Zend Server's Apache version is 2.2.16, PHP version is 5.3.0 I've installed MySQL CE 5.5.13 separately, and it works, both from command line and via MySQL Workbench. I have XAMPP installed, but none of its components are started up. It's got its own install of Apache 2.2.17 and MySQL 5.5.1. PHP version is 5.3.5 (I think). Question Have you had a similar situation before? What else might need taking care of in order to have Zend Server's Apache use D:\www as document root for my PHP projects?
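    A hedged way to narrow this down from the command line: ask the Zend-bundled Apache which configuration it actually loads and how it parsed the virtual hosts, since editing an httpd.conf that the running binary never reads would produce exactly these symptoms. The bin path below is an assumption based on the default install location mentioned above:
    C:\> cd "C:\Program Files (x86)\Zend\Apache2\bin"
    C:\> httpd.exe -V    # SERVER_CONFIG_FILE / HTTPD_ROOT show which httpd.conf this binary really uses
    C:\> httpd.exe -S    # dumps the parsed VirtualHost list; umbracms.local should appear here if the vhost file is included
    C:\> httpd.exe -t    # syntax check of the configuration that binary loads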

    Read the article

  • How to test if DNS information has propagated?

    - by Andrew
    I set up a new DNS entry for one of my subdomains (I haven't set up any Apache virtual hosts or anything like that yet). How can I check that the DNS information has propagated? I assumed I could simply ping my.subdomain.com and, if it resolved, it would show the IP address I specified in the A record. However, I don't know whether that assumption is correct. What is the best way to check this?
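    Ping only shows what your local resolver (and its cache) returns, so querying name servers directly gives a clearer picture. A short sketch with dig; my.subdomain.com and ns1.example.com are placeholders for your record and your zone's authoritative server:
    $ dig +short my.subdomain.com A                    # what your default resolver currently sees
    $ dig +short my.subdomain.com A @ns1.example.com   # ask the authoritative name server directly
    $ dig +short my.subdomain.com A @8.8.8.8           # ask a large public resolver to gauge propagation
    $ nslookup my.subdomain.com                        # rough equivalent on systems without dig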

    Read the article

  • Nginx + Passenger running a RoR app is returning 401 when 302 is expected

    - by DBruns
    I've got a RoR app running on Passenger on top of Nginx. I'm using devise for my authentication method and have a link that gets sent in an email to users that requires authentication to view. If a user clicks the link from Outlook, and IE is the default browser, IE makes an HTTP request using the following headers: GET http://www.company.com/custom_layouts/108 HTTP/1.1 Accept: */* Accept-Language: en-us User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.company.com Returning: HTTP/1.1 401 Unauthorized Content-Type: /; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Status: 401 X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15 WWW-Authenticate: Basic realm="Application" Cache-Control: no-cache X-UA-Compatible: IE=Edge,chrome=1 Set-Cookie: _vxwer_session=[sessionstr]; path=/; HttpOnly X-Runtime: 0.011918 Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack) 31 You need to sign in or sign up before continuing. 0 When the exact same URL is typed into the address bar, it does this: GET http://www.company.com/custom_layouts/108 HTTP/1.1 Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */* Accept-Language: en-US User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.company.com Returning: HTTP/1.1 302 Found Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Status: 302 X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15 Location: http://www.company.com/users/sign_in Cache-Control: no-cache X-UA-Compatible: IE=Edge,chrome=1 Set-Cookie: _xswer_session=[session_info_here]; path=/; HttpOnly X-Runtime: 0.010798 Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack) 6f <html><body>You are being <a href="http://www.company.com/users/sign_in">redirected</a>.</body></html> 0 I expect them to return the same thing regardless.
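    The main difference between the two requests above is the Accept header, which is what Rails uses to pick the response format, so it is worth confirming the 401 is format-driven rather than anything in nginx or Passenger. A hedged reproduction sketch with curl, using the same URL as in the captures:
    $ curl -i -H 'Accept: */*' http://www.company.com/custom_layouts/108         # mimics the Outlook-launched request; expect the 401 with WWW-Authenticate
    $ curl -i -H 'Accept: text/html' http://www.company.com/custom_layouts/108   # mimics the address-bar request; expect the 302 to /users/sign_in
    If that reproduces, the fix most likely belongs on the Devise side (for example treating */* as a navigational/HTML format, or turning off http_authenticatable) rather than in the web server config.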

    Read the article

  • understanding my site's DNS records

    - by DaveM
    Firstly, apologies for using the word 'pointage'; this is the word my French domain registrar uses, so I may have used the wrong term. OK, I would like to better understand what is going on with the 'pointage' record on my domain registrar's site. For my (currently empty) web site it reports the following details:
    Type : Host : Destination
    A : www.mydomain.org : 62.210.176.146
    A : mail.mydomain.org : 84.246.225.176
    Mx : .mydomain.org : mail.mydomain.org
    I think I understand the MX record: it simply relays mail on to mail.mydomain.org. However, why are the destinations for the www and mail hosts different? Even more confusing (for me) is the fact that if I ping either www.mydomain.org or mail.mydomain.org, the ping returns a different IP address. This IP address is consistent with that of my server (i.e. 92.39.247.92). So what exactly is going on? I'm sure I could find the information on the web; I've read a few things on the debianhelp site regarding DNS records, and it seems to suggest that the record should be a reverse lookup, but that certainly isn't the reverse of my server's IP. I don't know what I should be looking for, so links to docs and search terms for Google will be happily accepted (even though they go against the grain of SO answers to a question). Thanks in advance. David.
    ps. I should add that everything seems to work just fine, and I've just discovered this part of the management page of my registrar.
    Edit: addition of DNS records and ping results. The DNS record for the site: from what I've read there should really only be a single 'A' record, so has something gone wrong? Should I change it (remove the extras and then just point www.facilitee.org -> .facilitee.org and mail.facilitee.org -> .facilitee.org)? Here is the DNS record:
    A www.facilitee.org -> 92.39.247.92
    A .facilitee.org -> 92.39.247.92
    A mail.facilitee.org -> 92.39.247.92
    A webmail.facilitee.org -> 92.39.247.92
    MX .facilitee.org -> mail.facilitee.org
    ping results...
    ~$ ping www.facilitee.org
    PING www.facilitee.org (92.39.247.92) 56(84) bytes of data.
    64 bytes from vps4576-cloud.dns26.com (92.39.247.92):
    ~$ ping mail.facilitee.org
    PING mail.facilitee.org (92.39.247.92) 56(84) bytes of data.
    64 bytes from vps4576-cloud.dns26.com (92.39.247.92):
    So the DNS and the ping correspond, but the 'pointage' doesn't. How can I get a report of the pointage records other than from my registrar?
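    To see what the outside world actually resolves (rather than what the registrar's panel labels 'pointage'), you can query DNS directly. A short sketch with dig, using the names from the edit above; the authoritative server name is a placeholder:
    $ dig +short www.facilitee.org A        # the A record everyone gets (92.39.247.92 per the ping output)
    $ dig +short facilitee.org MX           # the MX target, which should name mail.facilitee.org
    $ dig +short mail.facilitee.org A       # the address that MX target resolves to
    $ dig +noall +answer facilitee.org ANY @ns1.your-registrar-dns.example   # ask the authoritative server for everything it publishes
    $ dig +short -x 92.39.247.92            # the reverse (PTR) lookup mentioned above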

    Read the article

< Previous Page | 217 218 219 220 221 222 223 224 225 226 227 228  | Next Page >