Search Results

Search found 12325 results on 493 pages for 'remote execution'.


  • Does RVM "failover" to another ruby instance on error?

    - by JohnMetta
    Have a strange problem in that I have a Rake task that seems to be using multiple versions of Ruby. When one fails, it seems to try another one. Details MacBook running 10.6.5 rvm 1.1.0 Rubies: 1.8.7-p302, ree-1.8.7-2010.02, ruby-1.9.2-p0 Rake 0.8.7 Gem 1.3.7 Veewee (provisioning Virtual Machines using Opcode.com, Vagrant and Chef) I'm not entirely sure the specific details of the error matter, but since it might be an issue with Veewee itself. So, what I'm trying to do is build a new box base on a veewee definition. The command fails with an error about a missing method- but what's interesting is how it fails. Errors I managed to figure out that if I only have one Ruby installed with RVM, it just fails. If I have more than one Ruby install, it fails at the same place, but execution seems to continue in another interpreter. Here are two different clipped console outputs. I've clipped them for size. The full outputs of each error are available as a gist. One Ruby version installed Here is the command run when I only have a single version of Ruby (1.8.7) available in RVM boudica:veewee john$ rvm rake build['mettabox'] --trace rvm 1.1.0 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/] (in /Users/john/Work/veewee) ** Invoke build (first_time) ** Execute build … creating new harddrive rake aborted! undefined method `max_vdi_size' for #<VirtualBox::SystemProperties:0x102d6af80> /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/virtualbox-0.8.3/lib/virtualbox/abstract_model/dirty.rb:172:in `method_missing' <------ stacktraces cut ----------> /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/bin/rake:31 /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19:in `load' /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19 Multiple Ruby Versions Here is the same command run with three versions of Ruby available in RVM. Prior to doing this, I used "rvm use 1.8.7." Again, I don't know how important the details of the specific errors are- what's interesting to me is that there are three separate errors- each with it's own stacktrace- and each in a different Ruby interpreter. Look at the bottom of each stacktrace and you'll see that they are all sourced from different interpreter locations- First ree-1.8.7, then ruby-1.8.7, then ruby-1.9.2: boudica:veewee john$ rvm rake build['mettabox'] --trace rvm 1.1.0 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/] (in /Users/john/Work/veewee) ** Invoke build (first_time) ** Execute build … creating new harddrive rake aborted! 
undefined method `max_vdi_size' for #<VirtualBox::SystemProperties:0x1059dd608> /Users/john/.rvm/gems/ree-1.8.7-2010.02/gems/virtualbox-0.8.3/lib/virtualbox/abstract_model/dirty.rb:172:in `method_missing' … /Users/john/.rvm/gems/ree-1.8.7-2010.02/gems/rake-0.8.7/bin/rake:31 /Users/john/.rvm/gems/ree-1.8.7-2010.02@global/bin/rake:19:in `load' /Users/john/.rvm/gems/ree-1.8.7-2010.02@global/bin/rake:19 (in /Users/john/Work/veewee) ** Invoke build (first_time) ** Execute build isofile ubuntu-10.04.1-server-amd64.iso is available ["a1b857f92eecaf9f0a31ecfc39dee906", "30b5c6fdddbfe7b397fe506400be698d"] [] Last good state: -1 Current step: 0 last good state -1 destroying machine+disks (re-)executing step 0-initial-a1b857f92eecaf9f0a31ecfc39dee906 VBoxManage: error: Machine settings file '/Users/john/VirtualBox VMs/mettabox/mettabox.vbox' already exists VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Machine, interface IMachine, callee nsISupports Context: "CreateMachine(bstrSettingsFile.raw(), name.raw(), osTypeId.raw(), Guid(id).toUtf16().raw(), FALSE , machine.asOutParam())" at line 247 of file VBoxManageMisc.cpp rake aborted! undefined method `memory_size=' for nil:NilClass /Users/john/Work/veewee/lib/veewee/session.rb:303:in `create_vm' /Users/john/Work/veewee/lib/veewee/session.rb:166:in `build' /Users/john/Work/veewee/lib/veewee/session.rb:560:in `transaction' /Users/john/Work/veewee/lib/veewee/session.rb:163:in `build' /Users/john/Work/veewee/Rakefile:87 /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/lib/rake.rb:636:in `call' /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/lib/rake.rb:631:in `each' … /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/bin/rake:31 /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19:in `load' /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19 (in /Users/john/Work/veewee) ** Invoke build (first_time) ** Execute build isofile ubuntu-10.04.1-server-amd64.iso is available ["a9c4ab3257e1da3479c984eae9905c2a", "30b5c6fdddbfe7b397fe506400be698d"] [] Last good state: -1 Current step: 0 last good state -1 (re-)executing step 0-initial-a9c4ab3257e1da3479c984eae9905c2a VBoxManage: error: Machine settings file '/Users/john/VirtualBox VMs/mettabox/mettabox.vbox' already exists VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Machine, interface IMachine, callee nsISupports Context: "CreateMachine(bstrSettingsFile.raw(), name.raw(), osTypeId.raw(), Guid(id).toUtf16().raw(), FALSE , machine.asOutParam())" at line 247 of file VBoxManageMisc.cpp rake aborted! undefined method `memory_size=' for nil:NilClass /Users/john/Work/veewee/lib/veewee/session.rb:303:in `create_vm' /Users/john/Work/veewee/lib/veewee/session.rb:166:in `block in build' /Users/john/Work/veewee/lib/veewee/session.rb:560:in `transaction' /Users/john/Work/veewee/lib/veewee/session.rb:163:in `build' /Users/john/Work/veewee/Rakefile:87:in `block in <top (required)>' /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:634:in `call' /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:634:in `block in execute' … /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:2013:in `top_level' /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:1992:in `run' /Users/john/.rvm/rubies/ruby-1.9.2-p0/bin/rake:35:in `<main>' It isn't until we reach the last installed version of Ruby that execution halts. 
Discussion: Does anyone have any idea what's going on here? Has anyone seen this "failover"-like behavior before? It seems strange to me that the first exception doesn't halt execution as it did with a single interpreter, and I wonder if there are things happening when RVM is installed that we Ruby developers are not considering.
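
    One way to rule out the multi-interpreter behaviour is to pin the rake run to a single RVM ruby explicitly. A minimal sketch, assuming the rubies listed above (take the exact names from rvm list):
    $ rvm list
    $ rvm use ruby-1.8.7-p302
    $ rvm ruby-1.8.7-p302 do rake build['mettabox'] --trace
    If the failure still cascades across interpreters with the run pinned this way, the problem may lie in how the rvm rake wrapper re-executes the task rather than in Veewee itself.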


  • Maven does not resolve a local Grails plug-in

    - by Drew
    My goal is to take a Grails web application and build it into a Web ARchive (WAR file) using Maven, and the key is that it must populate the "plugins" folder without live access to the internet. An "out of the box" Grails webapp will already have the plugins folder populated with JAR files, but the maven build script should take care of populating it, just like it does for any traditional WAR projects (such as WEB-INF/lib/ if it's empty) This is an error when executing mvn grails:run-app with Grails 1.1 using Maven 2.0.10 and org.grails:grails-maven-plugin:1.0. (This "hibernate-1.1" plugin is needed to do GORM.) [INFO] [grails:run-app] Running pre-compiled script Environment set to development Plugin [hibernate-1.1] not installed, resolving.. Reading remote plugin list ... Error reading remote plugin list [svn.codehaus.org], building locally... Unable to list plugins, please check you have a valid internet connection: svn.codehaus.org Reading remote plugin list ... Error reading remote plugin list [plugins.grails.org], building locally... Unable to list plugins, please check you have a valid internet connection: plugins.grails.org Plugin 'hibernate' was not found in repository. If it is not stored in a configured repository you will need to install it manually. Type 'grails list-plugins' to find out what plugins are available. The build machine does not have access to the internet and must use an internal/enterprise repository, so this error is just saying that maven can't find the required artifact anywhere. That dependency is already included with the stock Grails software that's installed locally, so I just need to figure out how to get my POM file to unpackage that ZIP file into my webapp's "plugins" folder. I've tried installing the plugin manually to my local repository and making it an explicit dependency in POM.xml, but it's still not being recognized. Maybe you can't pull down grails plugins like you would a standard maven reference? mvn install:install-file -DgroupId=org.grails -DartifactId=grails-hibernate -Dversion=1.1 -Dpackaging=zip -Dfile=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip I can manually setup the Grails webapp from the command-line, which creates that local ./plugins folder properly. This is a step in the right direction, so maybe the question is: how can I incorporate this goal into my POM? mvn grails:install-plugin -DpluginUrl=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip Here is a copy of my POM.xml file, which was generated using an archetype. 
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.samples</groupId> <artifactId>sample-grails</artifactId> <packaging>war</packaging> <name>Sample Grails webapp</name> <properties> <sourceComplianceLevel>1.5</sourceComplianceLevel> </properties> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.grails</groupId> <artifactId>grails-crud</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>org.grails</groupId> <artifactId>grails-gorm</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>opensymphony</groupId> <artifactId>oscache</artifactId> <version>2.4</version> <exclusions> <exclusion> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> </exclusion> <exclusion> <groupId>javax.jms</groupId> <artifactId>jms</artifactId> </exclusion> <exclusion> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>1.8.0.7</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.5.6</version> <scope>runtime</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>jstl</artifactId> <version>1.2</version> </dependency> <!-- <dependency> <groupId>org.grails</groupId> <artifactId>grails-hibernate</artifactId> <version>1.1</version> <type>zip</type> </dependency> --> </dependencies> <build> <pluginManagement /> <plugins> <plugin> <groupId>org.grails</groupId> <artifactId>grails-maven-plugin</artifactId> <version>1.0</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>init</goal> <goal>maven-clean</goal> <goal>validate</goal> <goal>config-directories</goal> <goal>maven-compile</goal> <goal>maven-test</goal> <goal>maven-war</goal> <goal>maven-functional-test</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>${sourceComplianceLevel}</source> <target>${sourceComplianceLevel}</target> </configuration> </plugin> </plugins> </build> </project>
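
    Since the hibernate plugin ZIP already ships with the local Grails install, one hedged check is whether the manually installed plugin is actually picked up once Maven is forced offline (so it stops trying svn.codehaus.org and plugins.grails.org). This only reuses the commands from the question plus Maven's -o flag; nothing here is verified against grails-maven-plugin 1.0:
    $ mvn install:install-file -DgroupId=org.grails -DartifactId=grails-hibernate -Dversion=1.1 -Dpackaging=zip -Dfile=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip
    $ mvn grails:install-plugin -DpluginUrl=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip
    $ mvn -o grails:run-app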


  • Credentials can not be delegated - Alfresco Share

    - by leftcase
    I've hit a brick wall configuring Alfresco 4.0.d on Redhat 6. I'm using Kerberos authentication, it seems to be working normally, and single sign on is working on the main alfresco app itself. I've been through the configuration steps to get the share app working, but try as I may, I keep getting this error in catalina.out each time a browser accesses http://server:8080/share along with a 'Windows Security' password box. WARN [site.servlet.KerberosSessionSetupPrivilegedAction] credentials can not be delegated! Here's what I've done so far: Using AD users and computers, selected the alfrescohttp account, and selected 'trust this user for delegation to any service (Kerberos only). Copied /opt/alfresco-4.0.d/tomcat/shared/classes/alfresco/web-extension/share-config-custom.xml.sample to share-config-custom.xml and edited like this: <config evaluator="string-compare" condition="Kerberos" replace="true"> <kerberos> <password>*****</password> <realm>MYDOMAIN.CO.UK</realm> <endpoint-spn>HTTP/[email protected]</endpoint-spn> <config-entry>ShareHTTP</config-entry> </kerberos> </config> <config evaluator="string-compare" condition="Remote"> <remote> <keystore> <path>alfresco/web-extension/alfresco-system.p12</path> <type>pkcs12</type> <password>alfresco-system</password> </keystore> <connector> <id>alfrescoCookie</id> <name>Alfresco Connector</name> <description>Connects to an Alfresco instance using cookie-based authentication</description> <class>org.springframework.extensions.webscripts.connector.AlfrescoConnector</class> </connector> <endpoint> <id>alfresco</id> <name>Alfresco - user access</name> <description>Access to Alfresco Repository WebScripts that require user authentication</description> <connector-id>alfrescoCookie</connector-id> <endpoint-url>http://localhost:8080/alfresco/wcs</endpoint-url> <identity>user</identity> <external-auth>true</external-auth> </endpoint> </remote> </config> Setup the /etc/krb5.conf file like this: [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = MYDOMAIN.CO.UK default_tkt_enctypes = rc4-hmac default_tgs_enctypes = rc4-hmac forwardable = true proxiable = true [realms] MYDOMAIN.CO.UK = { kdc = mydc.mydomain.co.uk admin_server = mydc.mydomain.co.uk } [domain_realm] .mydc.mydomain.co.uk = MYDOMAIN.CO.UK mydc.mydomain.co.uk = MYDOMAIN.CO.UK /opt/alfresco-4.0.d/java/jre/lib/security/java.login.config is configured like this: Alfresco { com.sun.security.auth.module.Krb5LoginModule sufficient; }; AlfrescoCIFS { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescocifs.keytab" principal="cifs/server.mydomain.co.uk"; }; AlfrescoHTTP { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescohttp.keytab" principal="HTTP/server.mydomain.co.uk"; }; com.sun.net.ssl.client { com.sun.security.auth.module.Krb5LoginModule sufficient; }; other { com.sun.security.auth.module.Krb5LoginModule sufficient; }; ShareHTTP { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescohttp.keytab" principal="HTTP/server.mydomain.co.uk"; }; And finally, the following settings in alfresco-global.conf authentication.chain=kerberos1:kerberos,alfrescoNtlm1:alfrescoNtlm kerberos.authentication.real=MYDOMAIN.CO.UK kerberos.authentication.user.configEntryName=Alfresco kerberos.authentication.cifs.configEntryName=AlfrescoCIFS 
kerberos.authentication.http.configEntryName=AlfrescoHTTP kerberos.authentication.cifs.password=****** kerberos.authentication.http.password=***** kerberos.authentication.defaultAdministratorUserNames=administrator ntlm.authentication.sso.enabled=true As I say, I've hit a brick wall with this and I'd really appreciate any help you can give me! This question is also posted on the Alfresco forum, but I wondered if any folk here on serverfault have come across similar implementation challenges?
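
    The "credentials can not be delegated" warning generally means the service ticket presented to Share is not forwardable. A quick sanity check, assuming the MIT Kerberos client tools are available on the Alfresco host (someuser is a placeholder account in the realm):
    $ kinit -f someuser@MYDOMAIN.CO.UK
    $ klist -f
    The flags shown by klist should include F (forwardable); if they do not, delegation will fail regardless of the "trust this user for delegation" setting in AD.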


  • Can't connect to STunnel when it's running as a service

    - by John Francis
    I've got STunnel configured to proxy non SSL POP3 requests to GMail on port 111. This is working fine when STunnel is running as a desktop app, but when I run the STunnel service, I can't connect to port 111 on the machine (using Outlook Express for example). The Stunnel log file shows the port binding is succeeding, but it never sees a connection. There's something preventing the connection to that port when STunnel is running as a service? Here's stunnel.conf cert = stunnel.pem ; Some performance tunings socket = l:TCP_NODELAY=1 socket = r:TCP_NODELAY=1 ; Some debugging stuff useful for troubleshooting debug = 7 output = stunnel.log ; Use it for client mode client = yes ; Service-level configuration [gmail] accept = 127.0.0.1:111 connect = pop.gmail.com:995 stunnel.log from service 2010.10.07 12:14:22 LOG5[80444:72984]: Reading configuration from file stunnel.conf 2010.10.07 12:14:22 LOG7[80444:72984]: Snagged 64 random bytes from C:/.rnd 2010.10.07 12:14:23 LOG7[80444:72984]: Wrote 1024 new random bytes to C:/.rnd 2010.10.07 12:14:23 LOG7[80444:72984]: PRNG seeded successfully 2010.10.07 12:14:23 LOG7[80444:72984]: Certificate: stunnel.pem 2010.10.07 12:14:23 LOG7[80444:72984]: Certificate loaded 2010.10.07 12:14:23 LOG7[80444:72984]: Key file: stunnel.pem 2010.10.07 12:14:23 LOG7[80444:72984]: Private key loaded 2010.10.07 12:14:23 LOG7[80444:72984]: SSL context initialized for service gmail 2010.10.07 12:14:23 LOG5[80444:72984]: Configuration successful 2010.10.07 12:14:23 LOG5[80444:72984]: No limit detected for the number of clients 2010.10.07 12:14:23 LOG7[80444:72984]: FD=156 in non-blocking mode 2010.10.07 12:14:23 LOG7[80444:72984]: Option SO_REUSEADDR set on accept socket 2010.10.07 12:14:23 LOG7[80444:72984]: Service gmail bound to 0.0.0.0:111 2010.10.07 12:14:23 LOG7[80444:72984]: Service gmail opened FD=156 2010.10.07 12:14:23 LOG5[80444:72984]: stunnel 4.34 on x86-pc-mingw32-gnu with OpenSSL 1.0.0a 1 Jun 2010 2010.10.07 12:14:23 LOG5[80444:72984]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6 stunnel.log from desktop (working) process 2010.10.07 12:10:31 LOG5[80824:81200]: Reading configuration from file stunnel.conf 2010.10.07 12:10:31 LOG7[80824:81200]: Snagged 64 random bytes from C:/.rnd 2010.10.07 12:10:32 LOG7[80824:81200]: Wrote 1024 new random bytes to C:/.rnd 2010.10.07 12:10:32 LOG7[80824:81200]: PRNG seeded successfully 2010.10.07 12:10:32 LOG7[80824:81200]: Certificate: stunnel.pem 2010.10.07 12:10:32 LOG7[80824:81200]: Certificate loaded 2010.10.07 12:10:32 LOG7[80824:81200]: Key file: stunnel.pem 2010.10.07 12:10:32 LOG7[80824:81200]: Private key loaded 2010.10.07 12:10:32 LOG7[80824:81200]: SSL context initialized for service gmail 2010.10.07 12:10:32 LOG5[80824:81200]: Configuration successful 2010.10.07 12:10:32 LOG5[80824:81200]: No limit detected for the number of clients 2010.10.07 12:10:32 LOG7[80824:81200]: FD=156 in non-blocking mode 2010.10.07 12:10:32 LOG7[80824:81200]: Option SO_REUSEADDR set on accept socket 2010.10.07 12:10:32 LOG7[80824:81200]: Service gmail bound to 0.0.0.0:111 2010.10.07 12:10:32 LOG7[80824:81200]: Service gmail opened FD=156 2010.10.07 12:10:33 LOG5[80824:81200]: stunnel 4.34 on x86-pc-mingw32-gnu with OpenSSL 1.0.0a 1 Jun 2010 2010.10.07 12:10:33 LOG5[80824:81200]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6 2010.10.07 12:10:33 LOG7[80824:81844]: Service gmail accepted FD=188 from 127.0.0.1:24813 2010.10.07 12:10:33 LOG7[80824:81844]: Creating a new thread 2010.10.07 12:10:33 LOG7[80824:81844]: New thread created 
2010.10.07 12:10:33 LOG7[80824:25144]: Service gmail started 2010.10.07 12:10:33 LOG7[80824:25144]: FD=188 in non-blocking mode 2010.10.07 12:10:33 LOG7[80824:25144]: Option TCP_NODELAY set on local socket 2010.10.07 12:10:33 LOG5[80824:25144]: Service gmail accepted connection from 127.0.0.1:24813 2010.10.07 12:10:33 LOG7[80824:25144]: FD=212 in non-blocking mode 2010.10.07 12:10:33 LOG6[80824:25144]: connect_blocking: connecting 209.85.227.109:995 2010.10.07 12:10:33 LOG7[80824:25144]: connect_blocking: s_poll_wait 209.85.227.109:995: waiting 10 seconds 2010.10.07 12:10:33 LOG5[80824:25144]: connect_blocking: connected 209.85.227.109:995 2010.10.07 12:10:33 LOG5[80824:25144]: Service gmail connected remote server from 192.168.1.9:24814 2010.10.07 12:10:33 LOG7[80824:25144]: Remote FD=212 initialized 2010.10.07 12:10:33 LOG7[80824:25144]: Option TCP_NODELAY set on remote socket 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): before/connect initialization 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write client hello A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server hello A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server certificate A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server done A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write client key exchange A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write change cipher spec A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write finished A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 flush data 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read finished A 2010.10.07 12:10:33 LOG7[80824:25144]: 1 items in the session cache 2010.10.07 12:10:33 LOG7[80824:25144]: 1 client connects (SSL_connect()) 2010.10.07 12:10:33 LOG7[80824:25144]: 1 client connects that finished 2010.10.07 12:10:33 LOG7[80824:25144]: 0 client renegotiations requested 2010.10.07 12:10:33 LOG7[80824:25144]: 0 server connects (SSL_accept()) 2010.10.07 12:10:33 LOG7[80824:25144]: 0 server connects that finished 2010.10.07 12:10:33 LOG7[80824:25144]: 0 server renegotiations requested 2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache hits 2010.10.07 12:10:33 LOG7[80824:25144]: 0 external session cache hits 2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache misses 2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache timeouts 2010.10.07 12:10:33 LOG6[80824:25144]: SSL connected: new session negotiated 2010.10.07 12:10:33 LOG6[80824:25144]: Negotiated ciphers: RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5 2010.10.07 12:10:34 LOG7[80824:25144]: SSL socket closed on SSL_read 2010.10.07 12:10:34 LOG7[80824:25144]: Sending socket write shutdown 2010.10.07 12:10:34 LOG5[80824:25144]: Connection closed: 53 bytes sent to SSL, 118 bytes sent to socket 2010.10.07 12:10:34 LOG7[80824:25144]: Service gmail finished (0 left)
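
    The bind succeeds in both logs, so the difference is on the connection side. A hedged first step is to confirm, while the service is running, that something is actually listening on 111 and which process/account owns it (the service name "stunnel" below is an assumption):
    C:\> netstat -ano | findstr :111
    C:\> tasklist /fi "imagename eq stunnel*"
    C:\> sc qc stunnel
    If netstat shows the listener but connections still fail, a per-program firewall rule or the service account's privileges are the next things to compare against the desktop run.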


  • Can't log in via SSH to any accounts set to use /bin/bash as a default shell

    - by Gui Ambros
    I'm trying to install bash as the default shell on a ARM Linux running on an embedded device (Synology DS212+ NAS). But there's something really wrong, and I can't figure out what it is. Symptoms: 1) Root has /bin/bash as default shell, and can log in normally via SSH: $ grep root /etc/passwd root:x:0:0:root:/root:/bin/bash $ ssh root@NAS root@NAS's password: Last login: Sun Dec 16 14:06:56 2012 from desktop # 2) joeuser has /bin/bash as default shell, and receives "Permission denied" when trying to log in via SSH: $ grep joeuser /etc/passwd joeuser:x:1029:100:Joe User:/home/joeuser:/bin/bash $ ssh joeuser@localhost joeuser@NAS's password: Last login: Sun Dec 16 14:07:22 2012 from desktop Permission denied, please try again. Connection to localhost closed. 3) changing joeuser's shell back to /bin/sh: $ grep joeuser /etc/passwd joeuser:x:1029:100:Joe User:/home/joeuser:/bin/sh $ ssh joeuser@localhost Last login: Sun Dec 16 15:50:52 2012 from localhost $ To make things even more strange, I can log in as joeuser using /bin/bash using the serial console (!). Also a su - joeuser as root works fine, so the bash binary itself is working fine. In an act of despair, I changed joeuser's uid to 0 on /etc/passwd, but also didn't work, so it doesn't seem to be anything permission related. Seems that bash is doing some extra checking that sshd didn't like, and blocking the connections for non-root users. Maybe some sort of sanity checking - or terminal emulation - that is triggering the SIGCHLD, but only when called via ssh. I already went through every single item on sshd_config, and also put SSHD in debug mode, but didn't find anything strange. Here's my /etc/ssh/sshd_config: LogLevel DEBUG LoginGraceTime 2m PermitRootLogin yes RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile %h/.ssh/authorized_keys ChallengeResponseAuthentication no UsePAM yes AllowTcpForwarding no ChrootDirectory none Subsystem sftp internal-sftp -f DAEMON -u 000 And here's the output from /usr/syno/sbin/sshd -d, showing the failed attempt of joeuser trying to log in, with /bin/bash as the shell: debug1: Config token is loglevel debug1: Config token is logingracetime debug1: Config token is permitrootlogin debug1: Config token is rsaauthentication debug1: Config token is pubkeyauthentication debug1: Config token is authorizedkeysfile debug1: Config token is challengeresponseauthentication debug1: Config token is usepam debug1: Config token is allowtcpforwarding debug1: Config token is chrootdirectory debug1: Config token is subsystem debug1: HPN Buffer Size: 87380 debug1: sshd version OpenSSH_5.8p1-hpn13v11 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: read PEM private key done: type ECDSA debug1: private host key: #2 type 3 ECDSA debug1: rexec_argv[0]='/usr/syno/sbin/sshd' debug1: rexec_argv[1]='-d' Set /proc/self/oom_adj from 0 to -17 debug1: Bind to port 22 on ::. debug1: Server TCP RWIN socket size: 87380 debug1: HPN Buffer Size: 87380 Server listening on :: port 22. debug1: Bind to port 22 on 0.0.0.0. debug1: Server TCP RWIN socket size: 87380 debug1: HPN Buffer Size: 87380 Server listening on 0.0.0.0 port 22. debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 6 out 6 newsock 6 pipe -1 sock 9 debug1: inetd sockets after dupping: 4, 4 Connection from 127.0.0.1 port 52212 debug1: HPN Disabled: 0, HPN Buffer Size: 87380 debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1-hpn13v11 SSH: Server;Ltype: Version;Remote: 127.0.0.1-52212;Protocol: 2.0;Client: OpenSSH_5.8p1-hpn13v11 debug1: match: OpenSSH_5.8p1-hpn13v11 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1-hpn13v11 debug1: permanently_set_uid: 1024/100 debug1: MYFLAG IS 1 debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: AUTH STATE IS 0 debug1: REQUESTED ENC.NAME is 'aes128-ctr' debug1: kex: client->server aes128-ctr hmac-md5 none SSH: Server;Ltype: Kex;Remote: 127.0.0.1-52212;Enc: aes128-ctr;MAC: hmac-md5;Comp: none debug1: REQUESTED ENC.NAME is 'aes128-ctr' debug1: kex: server->client aes128-ctr hmac-md5 none debug1: expecting SSH2_MSG_KEX_ECDH_INIT debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user joeuser service ssh-connection method none SSH: Server;Ltype: Authname;Remote: 127.0.0.1-52212;Name: joeuser debug1: attempt 0 failures 0 debug1: Config token is loglevel debug1: Config token is logingracetime debug1: Config token is permitrootlogin debug1: Config token is rsaauthentication debug1: Config token is pubkeyauthentication debug1: Config token is authorizedkeysfile debug1: Config token is challengeresponseauthentication debug1: Config token is usepam debug1: Config token is allowtcpforwarding debug1: Config token is chrootdirectory debug1: Config token is subsystem debug1: PAM: initializing for "joeuser" debug1: PAM: setting PAM_RHOST to "localhost" debug1: PAM: setting PAM_TTY to "ssh" debug1: userauth-request for user joeuser service ssh-connection method password debug1: attempt 1 failures 0 debug1: do_pam_account: called Accepted password for joeuser from 127.0.0.1 port 52212 ssh2 debug1: monitor_child_preauth: joeuser has been authenticated by privileged process debug1: PAM: establishing credentials User child is on pid 9129 debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 65536 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_new: session 0 debug1: session_pty_req: session 0 alloc /dev/pts/1 debug1: server_input_channel_req: channel 0 request shell reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req shell debug1: Setting controlling tty using TIOCSCTTY. debug1: Received SIGCHLD. 
debug1: session_by_pid: pid 9130 debug1: session_exit_message: session 0 channel 0 pid 9130 debug1: session_exit_message: release channel 0 debug1: session_by_tty: session 0 tty /dev/pts/1 debug1: session_pty_cleanup: session 0 release /dev/pts/1 Received disconnect from 127.0.0.1: 11: disconnected by user debug1: do_cleanup debug1: do_cleanup debug1: PAM: cleanup debug1: PAM: closing session debug1: PAM: deleting credentials Here you have the full output of sshd -dd, together with ssh -vv. Bash: # bash --version GNU bash, version 3.2.49(1)-release (arm-none-linux-gnueabi) Copyright (C) 2007 Free Software Foundation, Inc. The bash binary was cross compiled from source. I also tried using a pre-compiled binary from the Optware distribution, but had the exact same problem. I checked for missing shared libraries using objdump -x, but they're all there. Any ideas what could be causing this "Permission denied, please try again."? I'm almost diving in the bash source code to investigate, but trying to avoid hours chasing something that may be silly.
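
    One common cause on embedded systems is that the newly installed shell is not listed in /etc/shells, which pam_shells (and some sshd setups) require. A cheap sketch to check that, and to see whether bash is reachable non-interactively over SSH at all, before diving into the bash source:
    $ cat /etc/shells
    $ grep -r pam_shells /etc/pam.d 2>/dev/null
    $ ssh joeuser@NAS 'echo $0; echo ok'
    If /bin/bash is missing from /etc/shells, adding it there is a quick experiment; if the non-interactive command also dies immediately, the SIGCHLD in the debug output points at the shell exiting rather than sshd rejecting the login.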


  • OpenVPN stopped working, what could have happened?

    - by jaja
    I have Openvpn, and it worked great when I used it on PC (Windows 8), then I copied all files (Certificates and config) to an Android 4 phone to use them. Now, Openvpn works on the phone, but not the PC. Specifically, when I open Google I get: The server at www.google.com can't be found, because the DNS lookup failed, but the VPN seems to be connected. I have a simple question, could the problem be because I copied the same files? Routing table before connecting:- IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.254 192.168.1.101 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.1.0 255.255.255.0 On-link 192.168.1.101 281 192.168.1.101 255.255.255.255 On-link 192.168.1.101 281 192.168.1.255 255.255.255.255 On-link 192.168.1.101 281 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.101 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.101 281 =========================================================================== Routing table after connecting:- IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.254 192.168.1.101 25 0.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 10.8.0.1 255.255.255.255 10.8.0.5 10.8.0.6 30 10.8.0.4 255.255.255.252 On-link 10.8.0.6 286 10.8.0.6 255.255.255.255 On-link 10.8.0.6 286 10.8.0.7 255.255.255.255 On-link 10.8.0.6 286 **.**.***.** 255.255.255.255 192.168.1.254 192.168.1.101 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 192.168.1.0 255.255.255.0 On-link 192.168.1.101 281 192.168.1.101 255.255.255.255 On-link 192.168.1.101 281 192.168.1.255 255.255.255.255 On-link 192.168.1.101 281 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.101 281 224.0.0.0 240.0.0.0 On-link 10.8.0.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.101 281 255.255.255.255 255.255.255.255 On-link 10.8.0.6 286 =========================================================================== Server conf:- port 1194 proto udp dev tun ca ca.crt cert myservername.crt key myservername.key dh dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt duplicate-cn keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status.log verb 3 push "redirect-gateway def1" Client conf:- client dev tun proto udp remote 89.32.148.35 1194 resolv-retry infinite nobind persist-key persist-tun mute-replay-warnings ca ca.crt cert client1.crt key client1.key verb 3 comp-lzo redirect-gateway def1 Here is the log file:- Tue Dec 18 16:34:27 2012 OpenVPN 2.2.2 Win32-MSVC++ [SSL] [LZO2] [PKCS11] built on Dec 15 2011 Tue Dec 18 16:34:27 2012 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info. 
Tue Dec 18 16:34:27 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Tue Dec 18 16:34:27 2012 LZO compression initialized Tue Dec 18 16:34:27 2012 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ] Tue Dec 18 16:34:27 2012 Socket Buffers: R=[65536-65536] S=[65536-65536] Tue Dec 18 16:34:27 2012 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ] Tue Dec 18 16:34:27 2012 Local Options hash (VER=V4): '41690919' Tue Dec 18 16:34:27 2012 Expected Remote Options hash (VER=V4): '530fdded' Tue Dec 18 16:34:27 2012 UDPv4 link local: [undef] Tue Dec 18 16:34:27 2012 UDPv4 link remote: ..*.:1194 Tue Dec 18 16:34:27 2012 TLS: Initial packet from ..*.:1194, sid=4d1496ad 2079a5fa Tue Dec 18 16:34:28 2012 VERIFY OK: depth=1, /C=/ST=/L=/O=/OU=/CN=/name=/emailAddress= Tue Dec 18 16:34:28 2012 VERIFY OK: depth=0, /C=/ST=/L=/O=/OU=/CN=/name=/emailAddress= Tue Dec 18 16:34:29 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Tue Dec 18 16:34:29 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Tue Dec 18 16:34:29 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Tue Dec 18 16:34:29 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Tue Dec 18 16:34:29 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Tue Dec 18 16:34:29 2012 [myservername] Peer Connection Initiated with ..*.:1194 Tue Dec 18 16:34:32 2012 SENT CONTROL [myservername]: 'PUSH_REQUEST' (status=1) Tue Dec 18 16:34:32 2012 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,route 10.8.0.1,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5' Tue Dec 18 16:34:32 2012 OPTIONS IMPORT: timers and/or timeouts modified Tue Dec 18 16:34:32 2012 OPTIONS IMPORT: --ifconfig/up options modified Tue Dec 18 16:34:32 2012 OPTIONS IMPORT: route options modified Tue Dec 18 16:34:32 2012 ROUTE default_gateway=192.168.1.254 Tue Dec 18 16:34:32 2012 TAP-WIN32 device [Local Area Connection] opened: \.\Global{F0CFEBBF-9B1B-4CFB-8A82-027330974C30}.tap Tue Dec 18 16:34:32 2012 TAP-Win32 Driver Version 9.9 Tue Dec 18 16:34:32 2012 TAP-Win32 MTU=1500 Tue Dec 18 16:34:32 2012 Notified TAP-Win32 driver to set a DHCP IP/netmask of 10.8.0.6/255.255.255.252 on interface {F0CFEBBF-9B1B-4CFB-8A82-027330974C30} [DHCP-serv: 10.8.0.5, lease-time: 31536000] Tue Dec 18 16:34:32 2012 Successful ARP Flush on interface [26] {F0CFEBBF-9B1B-4CFB-8A82-027330974C30} Tue Dec 18 16:34:37 2012 TEST ROUTES: 2/2 succeeded len=1 ret=1 a=0 u/d=up Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD ..*. 
MASK 255.255.255.255 192.168.1.254 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=25 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD 0.0.0.0 MASK 128.0.0.0 10.8.0.5 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD 128.0.0.0 MASK 128.0.0.0 10.8.0.5 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD 10.8.0.1 MASK 255.255.255.255 10.8.0.5 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 Initialization Sequence Completed
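
    Because redirect-gateway def1 sends all traffic (DNS lookups included) into the tunnel, the Windows client also needs a DNS server it can reach through the VPN; the phone may simply be falling back to its mobile-network resolver. A hedged sketch of the usual fix is to push a resolver from the server conf (8.8.8.8 is just an example) and then re-test on the PC:
    push "dhcp-option DNS 8.8.8.8"
    C:\> ipconfig /flushdns
    C:\> nslookup www.google.com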


  • Ubuntu 10.04 recognizing USB 2.0 external HD as USB 1.1

    - by btucker
    When I connect the USB 2.0 drive I see this: usb 1-4.3: new full speed USB device using ohci_hcd and address 5 so I know it's getting seen as USB 1.1. usb-devices shows that it really is USB 2.0 and connected to a USB 2.0 hub: T: Bus=01 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#= 2 Spd=12 MxCh= 4 D: Ver= 2.00 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=05e3 ProdID=0608 Rev=77.61 S: Product=USB2.0 Hub C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 4 Spd=12 MxCh= 0 D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=13fd ProdID=1340 Rev=02.10 S: Manufacturer=Generic S: Product=External C: #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=2mA I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage It seems the problem is that root hub is: T: Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10 D: Ver= 1.10 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=1d6b ProdID=0001 Rev=02.06 S: Manufacturer=Linux 2.6.32-25-server ohci_hcd S: Product=OHCI Host Controller S: SerialNumber=0000:00:02.0 C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub And there's no mention of ehci_hcd. lsusb -t gives me: /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ohci_hcd/10p, 12M |__ Port 4: Dev 2, If 0, Class=hub, Driver=hub/4p, 12M |__ Port 2: Dev 4, If 0, Class=stor., Driver=usb-storage, 12M |__ Port 3: Dev 5, If 0, Class=stor., Driver=usb-storage, 12M |__ Port 6: Dev 3, If 0, Class=stor., Driver=usb-storage, 12M It seems like I'm missing something which would allow the OS to see USB 2.0 devices. Can anyone point me in the right direction? EDIT Full lsusb -v output: Bus 001 Device 005: ID 13fd:1340 Initio Corporation Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x13fd Initio Corporation idProduct 0x1340 bcdDevice 2.10 iManufacturer 1 Generic iProduct 2 External iSerial 3 57442D574341595930323337 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 2mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x05e3 Genesys Logic, Inc. 
idProduct 0x0608 USB-2.0 4-Port HUB bcdDevice 77.61 iManufacturer 0 iProduct 1 USB2.0 Hub iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 100mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x00e0 Ganged power switching Ganged overcurrent protection Port indicators bPwrOn2PwrGood 50 * 2 milli seconds bHubContrCurrent 100 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0103 power enable connect Port 3: 0000.0103 power enable connect Port 4: 0000.0100 power Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 1 Single TT bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.32-25-server ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:02.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 11 bDescriptorType 41 nNbrPorts 10 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 1 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 0x00 PortPwrCtrlMask 0xff 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0103 power enable connect Port 5: 0000.0100 power Port 6: 0000.0103 power enable connect Port 7: 0000.0100 power Port 8: 0000.0100 power Port 9: 0000.0100 power Port 10: 0000.0100 power Device Status: 0x0003 Self Powered Remote Wakeup Enabled
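
    Since only ohci_hcd appears, it is worth confirming whether an EHCI (USB 2.0) controller exists on this box at all and whether its driver is present; a sketch, assuming a stock 10.04 kernel where ehci_hcd may be built as a module:
    $ lspci | grep -i usb
    $ lsmod | grep -E 'ehci|ohci'
    $ sudo modprobe ehci_hcd && dmesg | tail
    If lspci lists no EHCI controller, the chipset has no USB 2.0 host support and the 12M/OHCI enumeration is expected rather than a driver problem.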


  • email-spec destroys my rake cucumber:all

    - by Leonardo Dario Perna
    This works fine: $ rake cucumber:all Then $ script/plugin install git://github.com/bmabey/email-spec.git remote: Counting objects: 162, done. remote: Compressing objects: 100% (130/130), done. remote: Total 162 (delta 18), reused 79 (delta 13) Receiving objects: 100% (162/162), 127.65 KiB | 15 KiB/s, done. Resolving deltas: 100% (18/18), done. From git://github.com/bmabey/email-spec * branch HEAD - FETCH_HEAD And $ script/generate email_spec exists features/step_definitions create features/step_definitions/email_steps.rb And I add 'require 'email_spec/cucumber' in /feature/support/env.rb so it looks somethinng like: require File.expand_path(File.dirname(__FILE__) + '/../../config/environment') require 'cucumber/rails/world' require 'cucumber/formatter/unicode' # Comment out this line if you don't want Cucumber Unicode support require 'email_spec/cucumber' and now: rake cucumber:all gives me this error: $ rake cucumber:all --trace (in /Users/leonardodarioperna/Projects/frestyl/frestyl) ** Invoke cucumber:all (first_time) ** Invoke cucumber:ok (first_time) ** Invoke db:test:prepare (first_time) ** Invoke db:abort_if_pending_migrations (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute db:abort_if_pending_migrations ** Execute db:test:prepare ** Invoke db:test:load (first_time) ** Invoke db:test:purge (first_time) ** Invoke environment ** Execute db:test:purge ** Execute db:test:load ** Invoke db:schema:load (first_time) ** Invoke environment ** Execute db:schema:load ** Execute cucumber:ok /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -I "/Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/lib:lib" "/Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/cucumber" --profile default cucumber.yml was not found. Please refer to cucumber's documentation on defining profiles in cucumber.yml. You must define a 'default' profile to use the cucumber command without any arguments. Type 'cucumber --help' for usage. rake aborted! Command failed with status (1): [/System/Library/Frameworks/Ruby.framework/...] 
/Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:995:in `sh' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1010:in `call' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1010:in `sh' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1094:in `sh' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1029:in `ruby' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1094:in `ruby' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/lib/cucumber/rake/task.rb:68:in `run' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/lib/cucumber/rake/task.rb:138:in `define_task' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19 WHY? but the command: $ cucumber still works Any idea?
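
    The trace bottoms out at "cucumber.yml was not found ... You must define a 'default' profile", so a hedged first step is to restore cucumber.yml before blaming email-spec itself. Either re-run the cucumber generator (if the cucumber-rails generator is installed) or create a minimal file by hand; the profile below is only an example:
    $ script/generate cucumber
    $ cat > cucumber.yml <<'EOF'
    default: --format progress features
    EOF
    $ rake cucumber:all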


  • Openfire cannot subscribe a Gmail user

    - by cometta
    i trying to add gmail user with my local openfire, but get error below. I think something wrong with dns srv. can anyone suggest how to troubleshoot? </error> </presence> at org.jivesoftware.openfire.spi.RoutingTableImpl.routePacket(RoutingTableImpl.java:217) at org.jivesoftware.openfire.server.OutgoingSessionPromise$PacketsProcessor.returnErrorToSender(OutgoingSessionPromise.java:285) at org.jivesoftware.openfire.server.OutgoingSessionPromise$PacketsProcessor.run(OutgoingSessionPromise.java:204) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:651) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:676) at java.lang.Thread.run(Thread.java:613) 2010.04.25 23:30:57 Error returning error to sender. Original packet: <presence id="lBI4K-24" to="[email protected]" type="subscribe" from="[email protected]"/> org.jivesoftware.openfire.PacketException: Cannot route packet of type IQ or Presence to bare JID: <presence id="lBI4K-24" to="[email protected]" from="[email protected]" type="error"> <error code="404" type="cancel"> <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> </error> </presence> at org.jivesoftware.openfire.spi.RoutingTableImpl.routePacket(RoutingTableImpl.java:217) at org.jivesoftware.openfire.server.OutgoingSessionPromise$PacketsProcessor.returnErrorToSender(OutgoingSessionPromise.java:285) at org.jivesoftware.openfire.server.OutgoingSessionPromise$PacketsProcessor.run(OutgoingSessionPromise.java:219) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:651) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:676) at java.lang.Thread.run(Thread.java:613) 2010.04.25 23:31:56 Error returning error to sender. Original packet: <presence id="gmEsS-26" to="[email protected]" type="subscribe" from="[email protected]"/> org.jivesoftware.openfire.PacketException: Cannot route packet of type IQ or Presence to bare JID: <presence id="gmEsS-26" to="[email protected]" from="[email protected]" type="error"> <error code="404" type="cancel"> <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> </error> </presence> at org.jivesoftware.openfire.spi.RoutingTableImpl.routePacket(RoutingTableImpl.java:217) at org.jivesoftware.openfire.server.OutgoingSessionPromise$PacketsProcessor.returnErrorToSender(OutgoingSessionPromise.java:285) at org.jivesoftware.openfire.server.OutgoingSessionPromise$PacketsProcessor.run(OutgoingSessionPromise.java:219) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:651) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:676) at java.lang.Thread.run(Thread.java:613) 2010.04.25 23:31:56 Error returning error to sender. 
Original packet: <presence id="gmEsS-27" to="[email protected]" type="subscribe" from="[email protected]"/> org.jivesoftware.openfire.PacketException: Cannot route packet of type IQ or Presence to bare JID: <presence id="gmEsS-27" to="[email protected]" from="[email protected]" type="error"> <error code="404" type="cancel"> <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> </error> </presence> at org.jivesoftware.openfire.spi.RoutingTableImpl.routePacket(RoutingTableImpl.java:217) at org.jivesoftware.openfire.server.OutgoingSessionPromise$PacketsProcessor.returnErrorToSender(OutgoingSessionPromise.java:285) at org.jivesoftware.openfire.server.OutgoingSessionPromise$PacketsProcessor.run(OutgoingSessionPromise.java:204) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:651) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:676) at java.lang.Thread.run(Thread.java:613)

    Read the article

  • ssh permission denied

    - by Gitmo
    I am trying to ssh into a remote machine and I get the following debug messages: debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to xxx.xxx.x.xx [xxx.xxx.xx.x] port 22. debug1: Connection established. debug3: Not a RSA1 key file /home/hadoop/.ssh/id_rsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/hadoop/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2 debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email 
protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-cbc hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 128/256 debug2: bits set: 511/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /home/hadoop/.ssh/known_hosts debug3: check_host_in_hostfile: match line 20 debug1: Host '192.168.1.63' is known and matches the RSA host key. debug1: Found key in /home/hadoop/.ssh/known_hosts:20 debug2: bits set: 511/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/hadoop/.ssh/id_rsa (0x241c110) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: /home/hadoop/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey,password). What seems to be the problem?? I have tried everything, this is driving me nuts.

    Read the article

  • Strange intermittent SQL connection error, fixes on reboot, comes back after 3-5 days (ASP.NET)

    - by Ryan
    For some reason every 3-5 days our web app loses the ability to open a connection to the db with the following error, the strange thing is that all we have to do is reboot the container (it is a VPS) and it is restored to normal functionality. Then a few days later or so it happens again. Has anyone ever had such a problem? I have noticed a lot of ANONYMOUS LOGONs in the security log in the middle of the night from our AD server which is strange, and also some from an IP in Amsterdam. I am not sure how to tell what exactly they mean or if it is related or not. Server Error in '/ntsb' Application. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Source Error: Line 11: Line 12: Line 13: dbConnection.Open() Line 14: Line 15: Source File: C:\Inetpub\wwwroot\includes\connection.ascx Line: 13 Stack Trace: [SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. 
(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)] System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +248 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +245 System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject) +475 System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) +260 System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) +2445449 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +2445144 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +354 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +703 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +54 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +2414696 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +92 System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +1657 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +84 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +1645687 System.Data.SqlClient.SqlConnection.Open() +258 ASP.includes_connection_ascx.getConnection() in C:\Inetpub\wwwroot\includes\connection.ascx:13 ASP.default_aspx.Page_Load(Object sender, EventArgs e) in C:\Inetpub\wwwroot\Default.aspx:16 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42 System.Web.UI.Control.OnLoad(EventArgs e) +132 System.Web.UI.Control.LoadRecursive() +66 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428 Version Information: Microsoft .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053
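
    Since the failing provider in the stack trace is Named Pipes (error 40), one low-risk check worth trying while diagnosing this is to force the client onto TCP/IP with an explicit tcp: prefix and port in the connection string. The sketch below is only illustrative; the server name, database and credentials are placeholders, not values from the original post.

      using System;
      using System.Data.SqlClient;

      class ConnectionCheck
      {
          static void Main()
          {
              // "tcp:" forces the TCP/IP provider instead of Named Pipes; host, port,
              // database and credentials below are placeholders for your environment.
              var connectionString =
                  "Server=tcp:YOUR-SQL-HOST,1433;Database=YourDb;User Id=appUser;Password=secret;";

              using (var connection = new SqlConnection(connectionString))
              {
                  connection.Open();
                  Console.WriteLine("Connected. Server version: " + connection.ServerVersion);
              }
          }
      }

    If the error disappears on TCP but returns on Named Pipes, that narrows the intermittent failure to the pipes transport or name resolution rather than the SQL Server instance itself.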

    Read the article

  • New project created with Flex Mojo's archetype throws Cannot Find Parent Project-Maven Exception

    - by ignorant
    This is probably a silly question but I just cant seem to figure out. I'm completely new to flex and maven. Maven 2.2.1: Maven 2.2.1 unzipped,M2_HOME set and repository altered to point to different drive location in settings.xml Flex 4.0: Installed Created a multi-modular webapp project using flexmojo: mvn archetype:generate -DarchetypeRepository=http://repository.sonatype.org/content/groups/flexgroup -DarchetypeGroupId=org.sonatype.flexmojos -DarchetypeArtifactId=flexmojos-archetypes-modular-webapp -DarchetypeVersion=RELEASE with following options groupId=com.test artifactId=test version=1.0-snapshot package=com.tests * Creates * test |-- pom.xml |--swc -pom.xml |--swf -pom.xml `--war -pom.xml Parent pom has swc, swf, war as modules. Dependency is war-swf-swc. With parent artifactId of swf, swc, war set to swf, swc, test respectively. On executing mvn on test folder(for that matter clean or anything) I get this following error. G:\Projects\testmvn -e + Error stacktraces are turned on. [INFO] Scanning for projects... Downloading: http://repo1.maven.org/maven2/com/test/swc/1.0-snapshot/swc-1.0-snapshot.pom [INFO] Unable to find resource 'com.test:swc:pom:1.0-snapshot' in repository central (http://repo1.maven.org/maven2) [INFO] ------------------------------------------------------------------------ [ERROR] FATAL ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to resolve artifact. GroupId: com.test ArtifactId: swc Version: 1.0-snapshot Reason: Unable to download the artifact from any repository com.test:swc:pom:1.0-snapshot from the specified remote repositories: central (http://repo1.maven.org/maven2) [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.reactor.MavenExecutionException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:404) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:272) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1396) at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:823) at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromSourceFileInternal(DefaultMavenProjectBuilder.java:508) at org.apache.maven.project.DefaultMavenProjectBuilder.build(DefaultMavenProjectBuilder.java:200) at org.apache.maven.DefaultMaven.getProject(DefaultMaven.java:604) at 
org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:487) at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:560) at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:391) ... 12 more Caused by: org.apache.maven.project.ProjectBuildingException: POM 'com.test:swc' not found in repository: Unable to download the artifact from any repository com.test:swc:pom:1.0-snapshot from the specified remote repositories: central (http://repo1.maven.org/maven2) for project com.test:swc at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:605) at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1392) ... 19 more Caused by: org.apache.maven.artifact.resolver.ArtifactNotFoundException: Unable to download the artifact from any repository com.test:swc:pom:1.0-snapshot from the specified remote repositories: central (http://repo1.maven.org/maven2) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90) at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558) ... 20 more Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216) ... 22 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1 second [INFO] Finished at: Tue Jun 15 19:22:15 GMT+02:00 2010 [INFO] Final Memory: 1M/2M [INFO] ------------------------------------------------------------------------ Looks like its trying to download the project from maven's central repository instead of building it. What am I missing?

    Read the article

  • My server app works strangely. What could be the reason(s)?

    - by Poni
    Hi! I've written a server app (two parts, actually: a proxy server and a game server) in C++ (it is a board game). It uses IOCP as the sockets interface. For that app I've also written a "client simulator" (hereafter "client") app that spawns many client connections, each of which plays at very high speed, driving the CPU to 100% utilization. Here is the topology:
    Game server - holds the game state. Real players do not connect to it directly but through the proxy server. When a player joins a game, the proxy asks for it on behalf of that player, the game server spawns a "player instance" for that player, and from then on every notification between the game server and the player is passed through the proxy.
    Proxy server - holds the TCP connections with the real players. Players communicate with the game server only through it.
    Client simulator - connects to the proxy only.
    When running the servers (again, it's actually two server apps) and the client locally, everything works just fine. I'm talking about 40k+ player instances, all of them active in a game. On the other hand, when running a server remotely with, say, 1000 clients playing, things get strange. For example, I run it as described above, then kill the client simulator app with Task Manager ("End Process Tree"). It then seems as if a buffer on the remote server was modified by another thread; in other words, memory corruption occurred. The server crashes because it received an unknown message id (it's a custom protocol where each message has its own unique number). To make things clear, here is how I run the apps: PC1 - game server and client simulator (because the clients connect to the proxy). PC2 - proxy server. The strangest thing is this: only the remote side gets "corrupted". By "remote" I mean the PC that is not the one I use to code the app (VC++ 2008). Let's call the PC I use to code the apps "PC1". Now, for example, if I run the game server on PC1 (meaning the proxy server is on PC2 and the client simulator on PC1), then the proxy server crashes with an "unknown message id" error. Another variation: when I run the proxy server on PC1 (again, the dev machine) and the game server and client simulator on PC2, then the game server on PC2 crashes. As for the IOCP configuration: the servers' internal connections use the default receive/send buffer sizes. I even tried setting them to 1MB, but no luck. I have three PCs in total: 2 x Vista 64bit (one of those is the dev machine; the other is connected through WiFi) and 1 x WinXP 32bit. They're all connected in a "full duplex" manner. What could be the reason? I've tried about everything: stack tracing, recording some actions (like read/write logging), and so on. I want to stress that only the PC I'm not using to code the apps crashes (actually the server app "role" running on it - sometimes the game server and sometimes the proxy server). At first I thought that maybe the wireless PC has problems (it's wireless..), but TCP has its own mechanisms to make sure packets are delivered properly, and a crash also happens when trying it with the two PCs that are physically connected (Vista vs. XP). Another option is that the Windows DLL versions might have problems, but then again, one of the tests is Vista vs. Vista and the other is Vista vs. XP. Any idea?

    Read the article

  • ASP.NET MVC 3 - New Features

    - by imran_ku07
    Introduction:          ASP.NET MVC 3 just released by ASP.NET MVC team which includes some new features, some changes, some improvements and bug fixes. In this article, I will show you the new features of ASP.NET MVC 3. This will help you to get started using the new features of ASP.NET MVC 3. Full details of this announcement is available at Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix.   Description:       New Razor View Engine:              Razor view engine is one of the most coolest new feature in ASP.NET MVC 3. Razor is speeding things up just a little bit more. It is much smaller and lighter in size. Also it is very easy to learn. You can say ' write less, do more '. You can get start and learn more about Razor at Introducing “Razor” – a new view engine for ASP.NET.         Granular Request Validation:             Another biggest new feature in ASP.NET MVC 3 is Granular Request Validation. Default request validator will throw an exception when he see < followed by an exclamation(like <!) or < followed by the letters a through z(like <s) or & followed by a pound sign(like &#123) as a part of querystring, posted form, headers and cookie collection. In previous versions of ASP.NET MVC, you can control request validation using ValidateInputAttriubte. In ASP.NET MVC 3 you can control request validation at Model level by annotating your model properties with a new attribute called AllowHtmlAttribute. For details see Granular Request Validation in ASP.NET MVC 3.       Sessionless Controller Support:             Sessionless Controller is another great new feature in ASP.NET MVC 3. With Sessionless Controller you can easily control your session behavior for controllers. For example, you can make your HomeController's Session as Disabled or ReadOnly, allowing concurrent request execution for single user. For details see Concurrent Requests In ASP.NET MVC and HowTo: Sessionless Controller in MVC3 – what & and why?.       Unobtrusive Ajax and  Unobtrusive Client Side Validation is Supported:             Another cool new feature in ASP.NET MVC 3 is support for Unobtrusive Ajax and Unobtrusive Client Side Validation.  This feature allows separation of responsibilities within your web application by separating your html with your script. For details see Unobtrusive Ajax in ASP.NET MVC 3 and Unobtrusive Client Validation in ASP.NET MVC 3.       Dependency Resolver:             Dependency Resolver is another great feature of ASP.NET MVC 3. It allows you to register a dependency resolver that will be used by the framework. With this approach your application will not become tightly coupled and the dependency will be injected at run time. For details see ASP.NET MVC 3 Service Location.       New Helper Methods:             ASP.NET MVC 3 includes some helper methods of ASP.NET Web Pages technology that are used for common functionality. These helper methods includes: Chart, Crypto, WebGrid, WebImage and WebMail. For details of these helper methods, please see ASP.NET MVC 3 Release Notes. For using other helper methods of ASP.NET Web Pages see Using ASP.NET Web Pages Helpers in ASP.NET MVC.       Child Action Output Caching:             ASP.NET MVC 3 also includes another feature called Child Action Output Caching. This allows you to cache only a portion of the response when you are using Html.RenderAction or Html.Action. This cache can be varied by action name, action method signature and action method parameter values. For details see this.     
  RemoteAttribute:             ASP.NET MVC 3 allows you to validate a form field by making a remote server call through Ajax. This makes it very easy to perform remote validation at client side and quickly give the feedback to the user. For details see How to: Implement Remote Validation in ASP.NET MVC.       CompareAttribute:             ASP.NET MVC 3 includes a new validation attribute called CompareAttribute. CompareAttribute allows you to compare the values of two different properties of a model. For details see CompareAttribute in ASP.NET MVC 3.       Miscellaneous New Features:                    ASP.NET MVC 2 includes FormValueProvider, QueryStringValueProvider, RouteDataValueProvider and HttpFileCollectionValueProvider. ASP.NET MVC 3 adds two additional value providers, ChildActionValueProvider and JsonValueProvider(JsonValueProvider is not physically exist).  ChildActionValueProvider is used when you issue a child request using Html.Action and/or Html.RenderAction methods, so that your explicit parameter values in Html.Action and/or Html.RenderAction will always take precedence over other value providers. JsonValueProvider is used to model bind JSON data. For details see Sending JSON to an ASP.NET MVC Action Method Argument.           In ASP.NET MVC 3, a new property named FileExtensions added to the VirtualPathProviderViewEngine class. This property is used when looking up a view by path (and not by name), so that only views with a file extension contained in the list specified by this new property is considered. For details see VirtualPathProviderViewEngine.FileExtensions Property .           ASP.NET MVC 3 installation package also includes the NuGet Package Manager which will be automatically installed when you install ASP.NET MVC 3. NuGet makes it easy to install and update open source libraries and tools in Visual Studio. See this for details.           In ASP.NET MVC 2, client side validation will not trigger for overridden model properties. For example, if have you a Model that contains some overridden properties then client side validation will not trigger for overridden properties in ASP.NET MVC 2 but client side validation will work for overridden properties in ASP.NET MVC 3.           Client side validation is not supported for StringLengthAttribute.MinimumLength property in ASP.NET MVC 2. In ASP.NET MVC 3 client side validation will work for StringLengthAttribute.MinimumLength property.           ASP.NET MVC 3 includes new action results like HttpUnauthorizedResult, HttpNotFoundResult and HttpStatusCodeResult.           ASP.NET MVC 3 includes some new overloads of LabelFor and LabelForModel methods. For details see LabelExtensions.LabelForModel and LabelExtensions.LabelFor.           In ASP.NET MVC 3, IControllerFactory includes a new method GetControllerSessionBehavior. This method is used to get controller's session behavior. For details see IControllerFactory.GetControllerSessionBehavior Method.           In ASP.NET MVC 3, Controller class includes a new property ViewBag which is of type dynamic. This property allows you to access ViewData Dictionary using C # 4.0 dynamic features. For details see ControllerBase.ViewBag Property.           ModelMetadata includes a property AdditionalValues which is of type Dictionary. In ASP.NET MVC 3 you can populate this property using AdditionalMetadataAttribute. For details see AdditionalMetadataAttribute Class.           In ASP.NET MVC 3 you can also use MvcScaffolding to scaffold your Views and Controller. 
For details see Scaffold your ASP.NET MVC 3 project with the MvcScaffolding package.           If you want to convert your application from ASP.NET MVC 2 to ASP.NET MVC 3 then there is an excellent tool that automatically converts ASP.NET MVC 2 application to ASP.NET MVC 3 application. For details see MVC 3 Project Upgrade Tool.           In ASP.NET MVC 2 DisplayAttribute is not supported but in ASP.NET MVC 3 DisplayAttribute will work properly.           ASP.NET MVC 3 also support model level validation via the new IValidatableObject interface.           ASP.NET MVC 3 includes a new helper method Html.Raw. This helper method allows you to display unencoded HTML.     Summary:          In this article I showed you the new features of ASP.NET MVC 3. This will help you a lot when you start using ASP MVC 3. I also provide you the links where you can find further details. Hopefully you will enjoy this article too.  
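
    As a quick orientation, here is a small illustrative sketch (not taken from the article) that combines a few of the features listed above: AllowHtml for granular request validation, the new Compare and Remote validation attributes, and the dynamic ViewBag property. The RegisterModel, AccountController and IsUserNameAvailable names are invented for the example.

      using System.ComponentModel.DataAnnotations;
      using System.Web.Mvc;

      // Illustrative model using a few of the ASP.NET MVC 3 attributes discussed above.
      public class RegisterModel
      {
          [Required]
          [Remote("IsUserNameAvailable", "Account")]  // remote validation via an Ajax call
          public string UserName { get; set; }

          [Required]
          public string Password { get; set; }

          [Compare("Password")]                       // new CompareAttribute in MVC 3
          public string ConfirmPassword { get; set; }

          [AllowHtml]                                 // granular request validation for this field only
          public string Biography { get; set; }
      }

      public class AccountController : Controller
      {
          public ActionResult Register()
          {
              ViewBag.Title = "Register";             // ViewBag is a dynamic wrapper over ViewData
              return View(new RegisterModel());
          }

          // Target of the RemoteAttribute above; returns JSON for the client-side validator.
          public JsonResult IsUserNameAvailable(string userName)
          {
              bool available = !string.Equals(userName, "admin");  // placeholder check
              return Json(available, JsonRequestBehavior.AllowGet);
          }
      }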

    Read the article

  • System.Web.Services.Protocols.SoapException - Security perssmission issue

    - by Hiscal
    Can any one help me to resolve this error.My website hosted on shared environment. Server Error in '/' Application. System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessPermission.Demand() at System.Net.ServicePointManager.set_CertificatePolicy(ICertificatePolicy value) at BirdieThis.WebService.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment) The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.SecurityPermission The first permission that failed was: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The demand was for: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The granted set of the failing assembly was: <PermissionSet class="System.Security.PermissionSet" version="1"> <IPermission class="System.Security.Permissions.EnvironmentPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="TEMP;TMP;USERNAME;OS;COMPUTERNAME"/> <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="D:\Hosting\5457055\html" Write="d:\content\;d:\hosting\" Append="D:\Hosting\5457055\html" PathDiscovery="d:\hosting\"/> <IPermission class="System.Security.Permissions.IsolatedStorageFilePermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807"/> <IPermission class="System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="RestrictedMemberAccess"/> <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration"/> <IPermission class="System.Security.Permissions.UrlIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Url="file:///D:/Hosting/5457055/html/bin/App_Code.DLL"/> <IPermission class="System.Security.Permissions.ZoneIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Zone="MyComputer"/> <IPermission class="System.Security.Permissions.KeyContainerPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Level="Medium"/> <IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/> <IPermission 
class="System.Net.DnsPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Drawing.Printing.PrintingPermission, System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Level="DefaultPrinting"/> <IPermission class="System.Net.Mail.SmtpPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Access="Connect"/> <IPermission class="System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.OleDb.OleDbPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <URI uri="http://.*"/> <URI uri="https://.*"/> </ConnectAccess> </IPermission> <IPermission class="System.Net.SocketPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <ENDPOINT host="*.*.*.*" transport="Tcp" port="3306"/> </ConnectAccess> </IPermission> </PermissionSet> The assembly or AppDomain that failed was: App_Code, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null The method that caused the failure was: golfswitchs.BookGolfResult BookGolfCourse(mygolf.CourseBooking, mygolf.CoursePlayer, mygolf.CoursePayment) The Zone of the assembly that failed was: MyComputer The Url of the assembly that failed was: file:///D:/Hosting/5457055/html/bin/App_Code.DLL --- End of inner exception stack trace --- Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Web.Services.Protocols.SoapException: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. 
at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessPermission.Demand() at System.Net.ServicePointManager.set_CertificatePolicy(ICertificatePolicy value) at BirdieThis.WebService.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment) The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.SecurityPermission The first permission that failed was: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The demand was for: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The granted set of the failing assembly was: <PermissionSet class="System.Security.PermissionSet" version="1"> <IPermission class="System.Security.Permissions.EnvironmentPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="TEMP;TMP;USERNAME;OS;COMPUTERNAME"/> <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="D:\Hosting\5457055\html" Write="d:\content\;d:\hosting\" Append="D:\Hosting\5457055\html" PathDiscovery="d:\hosting\"/> <IPermission class="System.Security.Permissions.IsolatedStorageFilePermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807"/> <IPermission class="System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="RestrictedMemberAccess"/> <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration"/> <IPermission class="System.Security.Permissions.UrlIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Url="file:///D:/Hosting/5457055/html/bin/App_Code.DLL"/> <IPermission class="System.Security.Permissions.ZoneIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Zone="MyComputer"/> <IPermission class="System.Security.Permissions.KeyContainerPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Level="Medium"/> <IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/> <IPermission class="System.Net.DnsPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Drawing.Printing.PrintingPermission, System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Level="DefaultPrinting"/> <IPermission class="System.Net.Mail.SmtpPermission, System, Version=2.0.0.0, 
Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Access="Connect"/> <IPermission class="System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.OleDb.OleDbPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <URI uri="http://.*"/> <URI uri="https://.*"/> </ConnectAccess> </IPermission> <IPermission class="System.Net.SocketPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <ENDPOINT host="*.*.*.*" transport="Tcp" port="3306"/> </ConnectAccess> </IPermission> </PermissionSet> The assembly or AppDomain that failed was: App_Code, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null The method that caused the failure was: golfswitchs.BookGolfResult BookGolfCourse(mygolf.CourseBooking, mygolf.CoursePlayer, mygolf.CoursePayment) The Zone of the assembly that failed was: MyComputer The Url of the assembly that failed was: file:///D:/Hosting/5457055/html/bin/App_Code.DLL --- End of inner exception stack trace --- Source Error: Line 446: Line 447: oPayment.PayCurrency = "USD"; Line 448: oResult = oService.BookGolfCourse(oGolfItem, oGolfplayer, oPayment); Line 449: Response.Write(oResult.RetMsg); Line 450: Source File: c:\inetpub\vhosts\cfmdeveloper.com\subdomains\ind103\httpdocs\test.aspx.cs Line: 448 Stack Trace: [SoapException: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. 
at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessPermission.Demand() at System.Net.ServicePointManager.set_CertificatePolicy(ICertificatePolicy value) at BirdieThis.WebService.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment) The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.SecurityPermission The first permission that failed was: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The demand was for: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The granted set of the failing assembly was: <PermissionSet class="System.Security.PermissionSet" version="1"> <IPermission class="System.Security.Permissions.EnvironmentPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="TEMP;TMP;USERNAME;OS;COMPUTERNAME"/> <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="D:\Hosting\5457055\html" Write="d:\content\;d:\hosting\" Append="D:\Hosting\5457055\html" PathDiscovery="d:\hosting\"/> <IPermission class="System.Security.Permissions.IsolatedStorageFilePermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807"/> <IPermission class="System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="RestrictedMemberAccess"/> <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration"/> <IPermission class="System.Security.Permissions.UrlIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Url="file:///D:/Hosting/5457055/html/bin/App_Code.DLL"/> <IPermission class="System.Security.Permissions.ZoneIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Zone="MyComputer"/> <IPermission class="System.Security.Permissions.KeyContainerPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Level="Medium"/> <IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/> <IPermission class="System.Net.DnsPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Drawing.Printing.PrintingPermission, System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Level="DefaultPrinting"/> <IPermission class="System.Net.Mail.SmtpPermission, System, Version=2.0.0.0, 
Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Access="Connect"/> <IPermission class="System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.OleDb.OleDbPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <URI uri="http://.*"/> <URI uri="https://.*"/> </ConnectAccess> </IPermission> <IPermission class="System.Net.SocketPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <ENDPOINT host="*.*.*.*" transport="Tcp" port="3306"/> </ConnectAccess> </IPermission> </PermissionSet> The assembly or AppDomain that failed was: App_Code, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null The method that caused the failure was: golfswitchs.BookGolfResult BookGolfCourse(mygolf.CourseBooking, mygolf.CoursePlayer, mygolf.CoursePayment) The Zone of the assembly that failed was: MyComputer The Url of the assembly that failed was: file:///D:/Hosting/5457055/html/bin/App_Code.DLL --- End of inner exception stack trace ---] System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +431766 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +204 mygolf.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment) +80 birdiethis.web.test.BookClub() in c:\inetpub\vhosts\cfmdeveloper.com\subdomains\ind103\httpdocs\test.aspx.cs:448 birdiethis.web.test.Page_Load(Object sender, EventArgs e) in c:\inetpub\vhosts\cfmdeveloper.com\subdomains\ind103\httpdocs\test.aspx.cs:28 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3082
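
    One observation that may help: the inner exception shows the demand comes from setting System.Net.ServicePointManager.CertificatePolicy, which requires the UnmanagedCode permission that Medium trust does not grant. Below is a sketch of the newer callback-based API that replaces ICertificatePolicy; whether the shared host permits even this depends on its trust policy, and if the certificate check is not actually needed, simply removing the call may be the easier route.

      using System.Net;
      using System.Net.Security;
      using System.Security.Cryptography.X509Certificates;

      // Sketch only: the obsolete CertificatePolicy setter demands UnmanagedCode permission.
      // The callback below is the replacement API; accepting every certificate is shown
      // purely to mirror what a permissive ICertificatePolicy implementation typically did.
      public static class CertificateValidationSetup
      {
          public static void AcceptAllCertificates()
          {
              ServicePointManager.ServerCertificateValidationCallback =
                  delegate(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors errors)
                  {
                      // WARNING: returning true unconditionally disables TLS certificate validation.
                      return true;
                  };
          }
      }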

    Read the article

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day, with the Launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (Released). We have had TFS 2010 in Production for nearly 6 months and have had only minimal problems. Update 12th April 2010  – Added Scott Hanselman’s tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition. Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! @shanselman - Scott Hanselman Same as before, we need to Uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes.   We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup. Done - Snapshot the hyper-v server Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server. Done - Uninstall Visual Studio Team Explorer 2010 RC You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server. Done - Uninstall TFS 2010 RC Done - Install TFS 2010 RTM Done - Configure TFS 2010 RTM Pick the Upgrade option and point it at your existing “tfs_Configuration” database to load all of the existing settings Done - Upgrade the SharePoint Extensions Upgrade Build Servers (Pending) Test the server The back out plan, and you should always have one, is to restore the snapshot. Upgrading to Team Foundation Server 2010 – Done The first thing you need to do is off the TFS server and then log into the Hyper-v server and create a snapshot. Figure: Make sure you turn the server off and delete all old snapshots before you take a new one I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM’ness that is coming next. The upgrade procedure for TFS2010 is to uninstal the old version and install the new one. Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.   Figure: Most of the heavy lifting is done by the Uninstaller, but make sure you have removed any of the client bits first. Specifically Visual Studio 2010 or Team Explorer 2010.  Once the uninstall is complete, this took around 5 minutes for me, you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations. Figure: It is always recommended to install the 64bit version of a server application where possible. 
I do not think it is likely, with SharePoint 2010 and Exchange 2010  and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on out TFS server along with Analysis services and Reporting Services. But our SharePoint server lives elsewhere. Figure: This always confuses people, but in reality it makes sense. Don’t install what you do not need. Every extra you install has an impact of performance. If you are integrating with SharePoint you will need to run this install on every Front end server in your farm and don’t forget to upgrade your Build servers and proxy servers later. Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)   It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers. Figure: Installing Microsoft .NET Framework 4 takes the most time. Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, Add-on’s, Check in Policies and other bits as well. Configure Team Foundation Server 2010 – Done Now you can configure the server. If you have no key you will need to pick “Install a Trial Licence”, but it is only £500, or free with a MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key. Figure: You can pick trial and add your key later using the TFS Server Admin. Here is where the real choices happen. We are doing an Upgrade from a previous version, so I will pick Upgrade the same as all you folks that are using the RC or TFS 2008. Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgraded them to the release.   Once you have entered your database server name you can click “List available databases” and it will show what it can upgrade. Figure: Select your database from the list and at this point, make sure you have a valid backup. At this point you have not made ANY changes to the databases. At this point the configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008 refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want you can pick different options. Figure: Set the application tier account and Authentication method to use. We use NTLM to keep things simple as we host our TFS server externally for our remote developers.  Figure: Setting your TFS server URL’s to be the remote URL’s allows the reports to be accessed without using VPN. Very handy for those remote developers. Figure: Detected the existing Warehouse no problem. Figure: Again we love green ticks. It gives us a warm fuzzy feeling. Figure: The username for connecting to Reporting services should be a domain account (if you are on a domain that is). Figure: Setup the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.   You then need to run all of your readiness checks. These check can save your life! 
it will check all of the settings that you have entered as well as checking all the external services are configures and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous version were not. Microsoft changes the install to two steps, Install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install that it will complete. if you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.   Figure: Now we have everything setup the configuration wizard can do its work.  Figure: Took a while on the “Web site” stage for some point, but zipped though after that.  Figure: last wee bit. TFS Needs to do a little tinkering with the data to complete the upgrade. Figure: All upgraded. I am not worried about the yellow triangle as SharePoint was being a little silly Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException) Exception Stack Trace:    at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) [Info   @16:10:16.307] Benign exception caught as part of verify: Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException) Exception Stack Trace:    at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)    at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify() Inner Exception Details: Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException) Exception Data Dictionary: ResponseStatusCode = InternalServerError I’ll look at SharePoint after, probably the SharePoint box just needs a restart or a kick If there is a problem with SharePoint it will come out in testing, But I will definatly be passing this on to Microsoft.   Upgrading the SharePoint connector to TFS 2010 You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this uninstall  the TFS 2010 RC from it in the same way as the server, and then install just the RTM Extensions. 
Figure: Only install the SharePoint Extensions on your SharePoint front end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.
Figure: When you configure SharePoint it uploads all of the solutions and templates.
Figure: Everything is uploaded successfully.
Figure: TFS even remembered the settings from the previous installation, fantastic.
Upgrading the Team Foundation Build Servers to TFS 2010
Just like on the SharePoint servers, you will need to upgrade the build servers to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint servers, you will probably have some version of Visual Studio installed; you will need to remove this as well. (Coming Soon)
Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS 2010
If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack: Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010, or Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010. If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that they have the latest version of their applications with SSW Diagnostic, which will check for service packs and hot fixes to Visual Studio as well.
Technorati Tags: TFS,TFS2010,TFS 2010,Upgrade

    Read the article

  • Database operations through SQL: Database Restore ...

    - by zc0000
    If you use SQL to restore a database, write the SQL carefully: any trivial mistake can prevent a successful execution. The cases listed here are based on simple experiments. With the option: MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name'. If the logical file name is not set correctly, the following error is raised: Logical file 'FILE_NAME' is not part of database 'DATABASE_NAME'. Use RESTORE FILELISTONLY to list the logical file names. RESTORE DATABASE is terminating abnormally. To be continued...
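    For reference, here is a minimal sketch of the pattern described above (the database name, backup path and file names are placeholders, not taken from the post): run RESTORE FILELISTONLY first, then reuse the logical names it reports in the MOVE clauses.

        -- List the logical file names contained in the backup, so the MOVE
        -- clauses below refer to names that actually exist in it.
        RESTORE FILELISTONLY
        FROM DISK = N'C:\Backups\MyDatabase.bak';

        -- Restore to new physical locations; 'MyDatabase' and 'MyDatabase_log'
        -- must match the logical names reported by FILELISTONLY.
        RESTORE DATABASE MyDatabase
        FROM DISK = N'C:\Backups\MyDatabase.bak'
        WITH MOVE N'MyDatabase' TO N'D:\Data\MyDatabase.mdf',
             MOVE N'MyDatabase_log' TO N'E:\Logs\MyDatabase_log.ldf',
             REPLACE;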

    Read the article

  • The entity type String is not part of the model for the current context error [migrated]

    - by Michael V
    I am getting the following error in my controller after the view submits the collection: The entity type String is not part of the model for the current context. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: The entity type String is not part of the model for the current context. Source Error: Line 51: foreach (var survey in mysurveys) Line 52: { Line 53: db.Entry(survey).State = EntityState.Modified; Line 54: Line 55: // db.Entry(survey).State = EntityState.Modified; Here is the code:

        [HttpPost]
        public ActionResult UpdateTest(FormCollection mysurveys)
        {
            System.Diagnostics.Debug.WriteLine("iam in test post" + mysurveys.Count);
            foreach (var survey in mysurveys)
            {
                db.Entry(survey).State = EntityState.Modified;
            }
            db.SaveChanges();
            return View(mysurveys);
        }

    Similar code with only one record (no foreach) works fine.
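    A plausible reading of the error (an editor's assumption, not stated in the question): enumerating a FormCollection yields the string keys of the posted fields, so db.Entry(survey) receives a String rather than an entity type the context knows about. Here is a hedged sketch of binding the post to a typed list instead; Survey and the db context field are hypothetical names standing in for the poster's actual model.

        [HttpPost]
        public ActionResult UpdateTest(List<Survey> mysurveys)
        {
            // Model binding materialises Survey entities, so each item is a
            // mapped entity type rather than a form-field key string.
            foreach (var survey in mysurveys)
            {
                db.Entry(survey).State = EntityState.Modified;
            }
            db.SaveChanges();
            return View(mysurveys);
        }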

    Read the article

  • SQL SERVER – Convert IN to EXISTS – Performance Talk

    - by pinaldave
    In a recent training session one of the attendees asked if I could show a simple method to convert an IN clause to an EXISTS clause. Here is a simple example.

        USE AdventureWorks
        GO
        -- use of =
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID = (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of exists
        SELECT *
        FROM HumanResources.Employee E
        WHERE EXISTS (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO

    Replacing IN with EXISTS does not always give better performance; however, in the case listed above it certainly does. Click on the image below to see the execution plan. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ODI SDK: Retrieving Information From the Logs

    - by Christophe Dupupet
    It is fairly common to want to retrieve data from the ODI logs: statistics, execution status, even the generated code can be retrieved from the logs. The ODI SDK provides a robust set of APIs to parse the repository and retrieve such information. To locate the information you are looking for, you have to keep in mind the structure of the logs: sessions contain steps; steps contain tasks. The session is the execution unit: basically, each time you execute something (interface, package, procedure, scenario) you create a new session. The steps are the individual entries found in a session: these will be the icons in your package, for instance. Or if you are running an interface, you will have one single step: the interface itself. The tasks represent the more atomic elements of the steps: the individual DDL, DML, scripts and so forth that are generated by ODI, along with all the detailed statistics for that task. All these details can be retrieved with the SDK. Because I had a question recently on the API ODIStepReport, I focus explicitly in this code on scenario logs, but a lot more can be done with these APIs. Here is the code sample (you can just cut and paste the code into your ODI 11.1.1.6 Groovy console). Just save, adapt the code to your environment (in particular to connect to your repository) and hit "run".

        //Created by ODI Studio
        import oracle.odi.core.OdiInstance
        import oracle.odi.core.config.OdiInstanceConfig
        import oracle.odi.core.config.MasterRepositoryDbInfo
        import oracle.odi.core.config.WorkRepositoryDbInfo
        import oracle.odi.core.security.Authentication
        import oracle.odi.core.config.PoolingAttributes
        import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder
        import oracle.odi.domain.runtime.scenario.OdiScenario
        import java.util.Collection
        import java.io.*

        /* -----------------------------------------------------------------------------------------
           Simple sample code to list all executions of the last version of a scenario,
           along with detailed steps information
           ----------------------------------------------------------------------------------------- */

        /* update the following parameters to match your environment => */
        def url = "jdbc:oracle:thin:@myserver:1521:orcl"
        def driver = "oracle.jdbc.OracleDriver"
        def schema = "ODIM1116"
        def schemapwd = "ODIM1116PWD"
        def workrep = "WORKREP1116"
        def odiuser = "SUPERVISOR"
        def odiuserpwd = "SUNOPSIS"

        // Rather than hardcoding the project code and folder name,
        // a great improvement here would be to parse the entire repository
        def scenario_name = "LOAD_DWH" /* Scenario Name */
        /* <= End of the update section */

        //--------------------------------------
        // Connection to the repository
        // Note for ODI 11.1.1.6: you could use the predefined odiInstance variable if you are
        // running the script from a Studio that is already connected to the repository
        def masterInfo = new MasterRepositoryDbInfo(url, driver, schema, schemapwd.toCharArray(), new PoolingAttributes())
        def workInfo = new WorkRepositoryDbInfo(workrep, new PoolingAttributes())
        def odiInstance = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo))

        //--------------------------------------
        // In all cases, we need to make sure we have authorized access to the repository
        def auth = odiInstance.getSecurityManager().createAuthentication(odiuser, odiuserpwd.toCharArray())
        odiInstance.getSecurityManager().setCurrentThreadAuthentication(auth)

        //--------------------------------------
        // Retrieve the scenario we are looking for
        def odiScenario = ((IOdiScenarioFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiScenario.class)).findLatestByName(scenario_name)

        if (odiScenario == null) {
            println("Error: cannot find scenario " + scenario_name)
            return
        }

        //--------------------------------------
        // Retrieve all reports for the scenario
        def OdiScenarioReportsList = odiScenario.getScenarioReports()

        println("*** Listing all reports for Scenario \"" + scenario_name + "\" ")

        //--------------------------------------
        // For each report, print the following:
        // - start time
        // - duration
        // - status
        // - step reports: selection of details
        for (s in OdiScenarioReportsList) {
            println("\tStart time: " + s.getSessionStartTime())
            println("\tDuration: " + s.getSessionDuration())
            println("\tStatus: " + s.getSessionStatus())

            def OdiScenarioStepReportsList = s.getStepReports()
            for (st in OdiScenarioStepReportsList) {
                println("\t\tStep Name: " + st.getStepName())
                println("\t\tStep Resource Name: " + st.getStepResourceName())
                println("\t\tStep Start time: " + st.getStepStartTime())
                println("\t\tStep Duration: " + st.getStepDuration())
                println("\t\tStep Status: " + st.getStepStatus())
                println("\t\tStep # of inserts: " + st.getStepInsertCount())
                println("\t\tStep # of updates: " + st.getStepUpdateCount() + '\n')
            }
            println("\t")
        }

    Read the article

  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them, remote desktop access to an Azure virtual machine. Now I would like to talk about another cool feature – Windows Azure Connect.
    What’s Windows Azure Connect?
    I would like to quote the definition of Windows Azure Connect from MSDN: With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services. There’s an image available on MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect the Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some of which are not. With Windows Azure Connect, the roles deployed in the cloud can consume resources located inside our Intranet or anywhere in the world. That means the roles can connect to the local database and access local shared resources such as shared files, folders and printers, etc.
    Difference between Windows Azure Connect and AppFabric
    It may seem that Windows Azure Connect duplicates the Windows Azure AppFabric. Both of them aim to solve the problem of how to communicate between resources in the cloud and resources inside the local network. The differences, as I understand them, are:
    Purpose: Connect is an IP-sec connection between the local machines and azure roles; AppFabric is an application service running on the cloud.
    Connectivity: Connect uses IP-sec and domain-join; AppFabric uses Net Tcp, Http and Https.
    Components: Connect consists of the Windows Azure Connect Driver; AppFabric consists of Service Bus, Access Control and Caching.
    Usage: with Connect, azure roles connect to a local database server, use local shared files, folders and printers, etc., and join the local AD; with AppFabric, you expose a local service to the Internet, move the authorization process to the cloud, integrate existing identities such as Live ID, Google ID, etc. with existing local services, and utilize the distributed cache.
    And here are some scenarios showing which of them should be used.
    I have a service deployed in the Intranet and I want people to be able to use it from the Internet: AppFabric.
    I have a website deployed on Azure that needs a database which is deployed inside the company, and I don’t want to expose the database to the Internet: Connect.
    I have a service deployed in the Intranet using AD authorization, and a website deployed on Azure which needs to use this service: Connect.
    I have a service deployed in the Intranet that some people on the Internet can use, but they need to be authorized and authenticated: AppFabric.
    I have a service in the Intranet and a website deployed on Azure; the service can be used from the Internet, and the website should be able to use it as well, with AD authorization for more functionality: Connect and AppFabric.
    How to Enable Windows Azure Connect
    We have talked a lot about Windows Azure Connect and its differences from Windows Azure AppFabric; now let’s see how to enable and use it. First of all, since this feature is in the CTP stage we have to apply for it before using it. On the Windows Azure Portal we can see our CTP feature status under the Home, Beta Program page.
    You can send the request to join the Beta Program to Microsoft on this page. After a few days Microsoft will send an email to you (to the email address of your Live ID) when it’s available. In my case we can see that Windows Azure Connect has been activated by Microsoft, and we can then click the Connect button on top, or click the Virtual Network item in the left navigation bar.
    The first thing we need to do, if it’s our first time entering the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.
    Add a Local Machine to Azure Connect
    As explained above, Windows Azure Connect can make an IP-sec connection between local machines and azure role instances, so we first add a local machine to our Connect. To do this we click the Install Local Endpoint button on top and the portal gives us a URL. Copy this URL to the machine we want to add and it will download the software. This software is installed on each local machine which we want to join to the Connect. Once it is installed a tray icon appears to indicate this machine has joined our Connect. The local agent refreshes its status with the Windows Azure Platform every 5 minutes, but we can click the Refresh button to make it retrieve the latest status at once. Currently my local machine is ready for Connect, and we can see it in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.
    Add a Windows Azure Role to Azure Connect
    Let’s create a very simple azure project with a basic ASP.NET web role inside. To make it available on Windows Azure Connect we open the azure project properties of this role from the solution explorer in Visual Studio, select the Virtual Network tab and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token; click this button and the portal will display the token. We copy this token, paste it into the box in the Visual Studio tab, and then deploy the application to azure. After the deployment completes we can see the role instance listed in the Windows Azure Portal - Virtual Connect section.
    Establish the Connect Group
    The final task is to create a connect group which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create a group for them, and they can be used in one or more groups. In the Virtual Connect section click the Groups and Roles node in the left side navigation bar and click the Create Group button on top. This brings up a dialog. What we need to do is specify a group name and description, and then select the local computers and azure role instances for this group. After the Azure Fabric has updated the group settings we can see the groups and the endpoints on the page. And if we switch back to the local machine we can see that the tray icon has changed and the status has turned to Connected. Windows Azure Connect updates the group information every 5 minutes; if you find the status is still Disconnected, right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connect.
    Test the Azure Connect between the Local Machine and the Azure Role Instance
    Now our local machine and azure role instance are connected, which means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our azure role can connect to it using the machine name or the IP address. Windows Azure Connect uses IPv6 to connect the local machines and role instances; you can get the IP address from the Windows Azure Portal Virtual Network section when you select an endpoint. I don’t want to give a full example of how to use the Connect, but I would like to show two very simple tests. The first one is PING.
    When a local machine and a role instance are connected through Windows Azure Connect we can PING either of them, provided we open the ICMP protocol in the Firewall settings. To do this we need to run a command line before the test. Open the command window on the local machine and the role instance and execute the following command: netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6
    Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me to enable the ICMPv6 setting. For the full discussion we had, please visit here. You can use the Remote Desktop Access feature to log on to the azure role instance; please refer to my previous blog post to learn how to use Remote Desktop Access in Windows Azure. Then we can PING the machine or the role instance by specifying its name. Below is the screen where I PING my local machine from my azure instance. We can use the IPv6 address to PING each other as well; in the following image I PING my role instance from my local machine through the IPv6 address.
    Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and then if we log on to the role instance we can see the folder content from the file explorer window.
    Summary
    In this blog post I introduced another new feature – Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way our azure application can use local resources such as database servers, printers, etc. without exposing them to the Internet.
    Hope this helps, Shaun
    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
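    As a side note on the SQL Server example above: opening the SQL Server port on the local machine can be done with the same netsh tool used for ICMPv6. A hedged sketch, assuming a default instance listening on TCP 1433 (adjust the port for named instances):

        netsh advfirewall firewall add rule name="SQL Server (TCP 1433)" dir=in action=allow protocol=TCP localport=1433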

    Read the article

  • Does IntelliJ-Idea support Groovy 2.x?

    - by Freewind
    I just tried IntelliJ IDEA 11.x and 12.x (EAP), but when I use Groovy 2.0.1 or 2.0.5 the code can't be run and some errors are thrown. The Groovy plugin for IDEA has little information about which versions of Groovy are supported. Does IDEA support Groovy 2.x? I want to try the new @TypeChecked annotation of Groovy 2.
    UPDATE: My groovy code:

        class X {
            def hello() {
                println("hello, groovy")
            }
            def static main(String[] args) {
                new X().hello()
            }
        }

    It uses groovy 2.0.5. And the error thrown: E:\java\jdk1.6.0_29_x64\bin\java -Didea.launcher.port=7532 "-Didea.launcher.bin.path=E:\java\IntelliJ IDEA 11.1.4\bin" -Dfile.encoding=UTF-8 -classpath "E:\java\jdk1.6.0_29_x64\jre\lib\charsets.jar;E:\java\jdk1.6.0_29_x64\jre\lib\deploy.jar;E:\java\jdk1.6.0_29_x64\jre\lib\javaws.jar;E:\java\jdk1.6.0_29_x64\jre\lib\jce.jar;E:\java\jdk1.6.0_29_x64\jre\lib\jsse.jar;E:\java\jdk1.6.0_29_x64\jre\lib\management-agent.jar;E:\java\jdk1.6.0_29_x64\jre\lib\plugin.jar;E:\java\jdk1.6.0_29_x64\jre\lib\resources.jar;E:\java\jdk1.6.0_29_x64\jre\lib\rt.jar;E:\java\jdk1.6.0_29_x64\jre\lib\ext\dcevm.jar;E:\java\jdk1.6.0_29_x64\jre\lib\ext\dnsns.jar;E:\java\jdk1.6.0_29_x64\jre\lib\ext\localedata.jar;E:\java\jdk1.6.0_29_x64\jre\lib\ext\sunjce_provider.jar;E:\WORKSPACE\TestGroovy2\out\production\TestGroovy2;E:\java\groovy-2.0.5\lib\ant-1.8.4.jar;E:\java\groovy-2.0.5\lib\ant-antlr-1.8.4.jar;E:\java\groovy-2.0.5\lib\ant-junit-1.8.4.jar;E:\java\groovy-2.0.5\lib\ant-launcher-1.8.4.jar;E:\java\groovy-2.0.5\lib\antlr-2.7.7.jar;E:\java\groovy-2.0.5\lib\asm-4.0.jar;E:\java\groovy-2.0.5\lib\asm-analysis-4.0.jar;E:\java\groovy-2.0.5\lib\asm-commons-4.0.jar;E:\java\groovy-2.0.5\lib\asm-tree-4.0.jar;E:\java\groovy-2.0.5\lib\asm-util-4.0.jar;E:\java\groovy-2.0.5\lib\bsf-2.4.0.jar;E:\java\groovy-2.0.5\lib\commons-cli-1.2.jar;E:\java\groovy-2.0.5\lib\commons-logging-1.1.1.jar;E:\java\groovy-2.0.5\lib\gpars-1.0-beta-3.jar;E:\java\groovy-2.0.5\lib\groovy-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-ant-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-bsf-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-console-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-docgenerator-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-groovydoc-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-groovysh-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-jmx-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-json-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-jsr223-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-servlet-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-sql-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-swing-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-templates-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-test-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-testng-2.0.5.jar;E:\java\groovy-2.0.5\lib\groovy-xml-2.0.5.jar;E:\java\groovy-2.0.5\lib\hamcrest-core-1.1.jar;E:\java\groovy-2.0.5\lib\ivy-2.2.0.jar;E:\java\groovy-2.0.5\lib\jansi-1.6.jar;E:\java\groovy-2.0.5\lib\jcommander-1.12.jar;E:\java\groovy-2.0.5\lib\jline-1.0.jar;E:\java\groovy-2.0.5\lib\jsp-api-2.0.jar;E:\java\groovy-2.0.5\lib\jsr166y-1.7.0.jar;E:\java\groovy-2.0.5\lib\junit-4.10.jar;E:\java\groovy-2.0.5\lib\qdox-1.12.jar;E:\java\groovy-2.0.5\lib\servlet-api-2.4.jar;E:\java\groovy-2.0.5\lib\testng-6.5.2.jar;E:\java\groovy-2.0.5\lib\xmlpull-1.1.3.1.jar;E:\java\groovy-2.0.5\lib\xstream-1.4.2.jar;E:\java\IntelliJ IDEA 11.1.4\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain X Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.objectweb.asm.MethodVisitor, but class was expected at 
org.codehaus.groovy.runtime.callsite.CallSiteGenerator.genConstructor(CallSiteGenerator.java:141) at org.codehaus.groovy.runtime.callsite.CallSiteGenerator.genPogoMetaMethodSite(CallSiteGenerator.java:162) at org.codehaus.groovy.runtime.callsite.CallSiteGenerator.compilePogoMethod(CallSiteGenerator.java:215) at org.codehaus.groovy.reflection.CachedMethod.createPogoMetaMethodSite(CachedMethod.java:228) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.createCachedMethodSite(PogoMetaMethodSite.java:212) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.createPogoMetaMethodSite(PogoMetaMethodSite.java:188) at groovy.lang.MetaClassImpl.createPogoCallSite(MetaClassImpl.java:3035) at org.codehaus.groovy.runtime.callsite.CallSiteArray.createPogoSite(CallSiteArray.java:147) at org.codehaus.groovy.runtime.callsite.CallSiteArray.createCallSite(CallSiteArray.java:161) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112) at X.main(sta.groovy:6) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120) Process finished with exit code 1
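    For reference, and separate from the IDE question, here is a minimal sketch of the @TypeChecked usage the poster wants to try (an editor's illustration, not code from the original post):

        import groovy.transform.TypeChecked

        @TypeChecked
        class Greeter {
            String greet(String name) {
                // With @TypeChecked the compiler verifies types at compile time,
                // so a typo such as nam.toUpperCase() would fail the build.
                return "hello, " + name.toUpperCase()
            }
        }

        println(new Greeter().greet("groovy"))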

    Read the article

  • LLBLGen Pro v3.5 has been released!

    - by FransBouma
    Last weekend we released LLBLGen Pro v3.5! Below is the list of what's new in this release. Of course, not everything is on this list, like the large amount of work we put into refactoring the runtime framework. The refactoring was necessary because our framework has two paradigms which were added to the framework at different times, and from a design perspective in the wrong order (the paradigm we added first, SelfServicing, should have been built on top of Adapter, the other paradigm, which was added more than a year after the first released version). The refactoring made sure the framework re-uses more code across the two paradigms (they already shared a lot of code) and is better prepared for the future. We're not done yet, but refactoring a massive framework like ours without breaking interfaces and existing applications is ... a bit of a challenge ;) To celebrate the release of v3.5, we give every customer a 30% discount! Use the coupon code NR1ORM with your order :) The full list of what's new:
    Designer
    Rule based .NET Attribute definitions. It's now possible to specify a rule using fine-grained expressions with an attribute definition to define which elements of a given type will receive the attribute definition. Rules can be assigned to attribute definitions on the project level, to make it even easier to define attribute definitions in bulk for many elements in the project. More information...
    Revamped Project Settings dialog. Multiple project related properties and settings dialogs have been merged into a single dialog called Project Settings, which makes it easier to configure the various settings related to project elements. It also makes it easier to find features previously not used by many (e.g. type conversions). More information...
    Home tab with Quick Start Guides. To make new users feel right at home, we added a home tab with quick start guides which guide you through four main use cases of the designer.
    System Type Converters. Many common conversions have been implemented by default in system type converters so users don't have to develop their own type converters anymore for these type conversions.
    Bulk Element Setting Manipulator. Changing setting values for multiple project elements used to require a lot of clicking and opening various editors; this dialog makes changing settings for multiple elements very easy.
    EDMX Importer. It's now possible to import entity model information from an existing Entity Framework EDMX file.
    Other changes and fixes: see the online documentation for the full list of changes and fixes.
    LLBLGen Pro Runtime Framework
    WCF Data Services (OData) support has been added. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF Data Services application using the VS.NET tools for WCF Data Services. WCF Data Services is a Microsoft technology for .NET 4 to expose your domain model using OData. More information...
    New query specification and execution API: QuerySpec. QuerySpec is our new query specification and execution API as an alternative to Linq and our more low-level API. It's built, like our Linq provider, on top of our lower-level API. More information...
    SQL Server 2012 support. The SQL Server DQE allows paging using the new SQL Server 2012 style. More information...
    System Type converters. For a common set of types the LLBLGen Pro runtime framework contains built-in type conversions so you don't need to write your own type converters anymore.
    Public/NonPublic property support. It's now possible to mark a field / navigator as non-public, which is reflected in the runtime framework as an internal/friend property instead of a public property. This way you can hide properties from the public interface of a generated class and still access them through code added to the generated code base.
    FULL JOIN support. It's now possible to perform FULL JOINs using the native query api and QuerySpec. It's left to the developer to check whether the target database supports FULL (OUTER) JOINs. Using a FULL JOIN with entity fetches is not recommended, and should only be used when both participants in the join aren't the target of the fetch.
    Dependency Injection Tracing. It's now possible to enable tracing on dependency injection. Enable tracing at level '4' on the traceswitch 'ORMGeneral'. This will emit trace information about which instance of which type got an instance of type T injected into property P.
    Entity Instances in projections in Linq. It's now possible to return an entity instance in a custom Linq projection. It's now also possible to pass this instance to a method inside the query projection. Inheritance is fully supported in this construct.
    Entity Framework support
    The Entity Framework has been updated over the past year with code-first support and a new, simpler context api: DbContext (with DbSet). The amount of code to generate is smaller and the context simpler. LLBLGen Pro v3.5 comes with support for DbContext and DbSet and generates code which utilizes these new classes.
    NHibernate support
    NHibernate v3.2+ built-in proxy factory factory support. By default the built-in ProxyFactoryFactory is selected.
    FluentNHibernate Session Manager uses 1.2 syntax. Fluent NHibernate mappings generate a SessionManager which uses the v1.2 syntax for the ProxyFactoryFactory location.
    Optionally emit schema / catalog name in mappings. Two settings have been added which allow the user to control whether the catalog name and/or schema name, as known in the project in the designer, is emitted into the mappings.

    Read the article

  • LINQ for SQL Developers and DBA’s

    - by AtulThakor
    Firstly I’d just like to thank the guys who organise the SQL Server User Group (Martin/Tony/Chris) for giving me the opportunity to speak at the recent event. Sorry about the slides taking so long, but here they are along with some extra information. The demos were all done using LINQPad 4.0, which can be downloaded here: http://www.linqpad.net/ There are two versions, 3.5 and 4.0. With 3.5 you should be able to replicate the problem I showed, where a query using a parameter which is X characters long creates a different execution plan to a query which uses a parameter which is Y characters long; otherwise I would just use 4.0. The sample database used is AdventureWorksLT2008, which can be downloaded from here: http://msftdbprodsamples.codeplex.com/releases/view/37109 The scripts have been named so that you can select the appropriate way to run them (i.e. C# expression / C# statement); each script can be run individually by highlighting the query and clicking the play symbol or hitting F5. Scripts and Slides: http://sqlblogcasts.com/blogs/atulthakor/An%20Introduction%20to%20LINQ.zip Please don’t hesitate to send any questions via email/twitter, I’ll try my best to answer your questions! Thanks, Atul
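    To make the parameter-length problem concrete, here is a small LINQPad-style C# sketch (an editor's illustration, not taken from the talk; the Products table name is assumed from the AdventureWorksLT typed context). On older LINQ to SQL versions each string value may be sent as an nvarchar sized to the value's length, so the two logically identical queries below can end up with two separate cached execution plans.

        string shortName = "Bike";        // 4 characters
        string longName  = "Bike Stand";  // 10 characters

        // The generated SQL may declare the parameter as nvarchar(4) for the
        // first query and nvarchar(10) for the second, giving two plan cache
        // entries for what is logically the same statement.
        var q1 = Products.Where(p => p.Name.Contains(shortName)).ToList();
        var q2 = Products.Where(p => p.Name.Contains(longName)).ToList();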

    Read the article

< Previous Page | 215 216 217 218 219 220 221 222 223 224 225 226  | Next Page >