Search Results


  • Converting LDAP from Tomcat to GlassFish

    - by Jon
    Hi, I have a simple web-app that is developed in NetBeans (6.8) and works fine in Tomcat (6) using LDAP (Active Directory). I need to convert this to an EE (JSF2) application, so I am moving from Tomcat to GlassFish (v3). I have changed the web files to xhtml and configured the xml files. However, I cannot get the GlassFish LDAP configuration to authenticate. I am attaching my old web.xml and server.xml (from Tomcat) snippets and the portions of the new web.xml, sun-web.xml, and the GlassFish configuration. If anyone can help me figure out where I am missing the piece that will allow a user to be authenticated, I would appreciate it. (btw, I am not using roles, just authenticating against the LDAP db is good enough.) As it is right now, my app will prompt me to enter a user when I try to access a file in the 'protected' area and the GlassFish server throws an exception when it fails to authenticate. Because it works under Tomcat, I know I have the right information, I just don't know how to format it to get GlassFish to pass it along. Thanks.
    TOMCAT FILES:
    - Tomcat server.xml:
    - web.xml:
    <web-resource-collection>
      <web-resource-name>Protected Area</web-resource-name>
      <description>Authentication Required</description>
      <url-pattern>/faces/protected/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
      <role-name>*</role-name>
    </auth-constraint>
    <security-role>
      <role-name>*</role-name>
    </security-role>
    <login-config>
      <auth-method>BASIC</auth-method>
      <realm-name>Please enter your user name and password:</realm-name>
    </login-config>
    GLASSFISH FILES: (I enabled the Security Manager on the Security panel, set the Default Realm to 'LDAPRealm', and added "-Djava.naming.referral=follow" to the JVM options.)
    - domain.xml:
    <auth-realm name="certificate" classname="com.sun.enterprise.security.auth.realm.certificate.CertificateRealm" />
    <auth-realm classname="com.sun.enterprise.security.auth.realm.ldap.LDAPRealm" name="LdapRealm">
      <property description="()" name="search-bind-password" value="xxxxxxxx" />
      <property description="()" name="search-bind-dn" value="cn=xxxxxxxx,ou=Administrators,ou=Information Technology,ou=ITTS,ou=Administrative,ou=xxx,dc=xxxxxx,dc=xxx" />
      <property name="jaas-context" value="ldapRealm" />
      <property name="base-dn" value="ou=xxx,dc=xxxxxx,dc=xxx" />
      <property name="directory" value="ldap://xxxx.xxxxxx.xxx:389" />
      <property name="search-filter" value="(&amp;(objectClass=user)(sAMAccountName=%s))" />
    </auth-realm>
    - web.xml:
    <security-constraint>
      <display-name>protected</display-name>
      <web-resource-collection>
        <web-resource-name>ProtectedArea</web-resource-name>
        <description/>
        <url-pattern>/faces/protected/*</url-pattern>
      </web-resource-collection>
      <auth-constraint>
        <description/>
        <role-name>*</role-name>
      </auth-constraint>
    </security-constraint>
    <security-role>
      <description/>
      <role-name>*</role-name>
    </security-role>
    <login-config>
      <auth-method>FORM</auth-method>
      <realm-name>LDAPRealm</realm-name>
      <form-login-config>
        <form-login-page>/faces/login.xhtml</form-login-page>
        <form-error-page>/faces/loginError.xhtml</form-error-page>
      </form-login-config>
    </login-config>
    - sun-web.xml:
    Here is the exception that it throws:
    SEVERE: SEC1113: Exception in LdapRealm when trying to authenticate user.
    javax.security.auth.login.LoginException: javax.security.auth.login.LoginException: User yyyyyyy not found.
    at com.sun.enterprise.security.auth.realm.ldap.LDAPRealm.findAndBind(LDAPRealm.java:450)
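
    Since the post leaves the sun-web.xml contents out, here is a minimal sketch of what that descriptor typically holds for this kind of setup. The group name is only a placeholder; alternatively, GlassFish's "Default Principal To Role Mapping" security option can be enabled so that the wildcard role in web.xml applies to all authenticated users without an explicit mapping:

    <sun-web-app>
      <security-role-mapping>
        <role-name>*</role-name>
        <group-name>SomeADGroup</group-name>  <!-- placeholder group from the LDAP directory -->
      </security-role-mapping>
    </sun-web-app>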

    Read the article

  • XSD and plain text

    - by Paul Knopf
    I have a rest/xml service that gives me the following... <verse-unit unit-id="38009001"> <marker class="begin-verse" mid="v38009001"/> <begin-chapter num="9"/><heading>Judgment on Israel&apos;s Enemies</heading> <begin-block-indent/> <begin-paragraph class="line-group"/> <begin-line/><verse-num begin-chapter="9">1</verse-num>The burden of the word of the <span class="divine-name">Lord</span> is against the land of Hadrach<end-line class="br"/> <begin-line class="indent"/>and Damascus is its resting place.<end-line class="br"/> <begin-line/>For the <span class="divine-name">Lord</span> has an eye on mankind<end-line class="br"/> <begin-line class="indent"/>and on all the tribes of Israel,<footnote id="f1"> A slight emendation yields <i> For to the <span class="divine-name">Lord</span> belongs the capital of Syria and all the tribes of Israel </i> </footnote><end-line class="br"/> </verse-unit> I used visual studio to generate a schema from this and used XSD.EXE to generate classes that I can use to deserialize this mess into programmable stuff. I got everything to work and it is deserialized perfectly (almost). The problem I have is with the random text mixed throughout the child nodes. The generated verse-unit objects gives me a list of objects (begin-line, begin-block-indent, etc), and also another list of string objects that represent the bits of string throughout the xml. Here is my schema <xs:element maxOccurs="unbounded" name="verse-unit"> <xs:complexType mixed="true"> <xs:sequence> <xs:choice maxOccurs="unbounded"> <xs:element name="marker"> <xs:complexType> <xs:attribute name="class" type="xs:string" use="required" /> <xs:attribute name="mid" type="xs:string" use="required" /> </xs:complexType> </xs:element> <xs:element name="begin-chapter"> <xs:complexType> <xs:attribute name="num" type="xs:unsignedByte" use="required" /> </xs:complexType> </xs:element> <xs:element name="heading"> <xs:complexType mixed="true"> <xs:sequence minOccurs="0"> <xs:element name="span"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute name="class" type="xs:string" use="required" /> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="begin-block-indent" /> <xs:element name="begin-paragraph"> <xs:complexType> <xs:attribute name="class" type="xs:string" use="required" /> </xs:complexType> </xs:element> <xs:element name="begin-line"> <xs:complexType> <xs:attribute name="class" type="xs:string" use="optional" /> </xs:complexType> </xs:element> <xs:element name="verse-num"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:unsignedByte"> <xs:attribute name="begin-chapter" type="xs:unsignedByte" use="optional" /> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> <xs:element name="end-line"> <xs:complexType> <xs:attribute name="class" type="xs:string" use="optional" /> </xs:complexType> </xs:element> <xs:element name="end-paragraph" /> <xs:element name="end-block-indent" /> <xs:element name="end-chapter" /> </xs:choice> </xs:sequence> <xs:attribute name="unit-id" type="xs:unsignedInt" use="required" /> </xs:complexType> </xs:element> WHAT I NEED IS THIS. I need the random text that is NOT surrounded by an xml node to be represented by an object so I know the order that everything is in. I know this is complicated, so let me try to simplify it. <field name="test_field_0"> Some text I'm sure you don't want. <subfield>Some text.</subfield> More text you don't want. 
</field> I need the XSD to generate a field object with items that can be either a text object or a subfield object. I need to know where the random text is within the child nodes.
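
    One way to get that ordering out of XmlSerializer is to put the text nodes and the child elements into a single array property. This is a hand-written sketch for the simplified field/subfield example above (not the classes xsd.exe generated), but the same [XmlText]/[XmlElement] combination can be applied to the generated verse-unit class:

    using System;
    using System.IO;
    using System.Xml.Serialization;

    public class Subfield
    {
        [XmlText]
        public string Value;
    }

    [XmlRoot("field")]
    public class Field
    {
        [XmlAttribute("name")]
        public string Name;

        // Text nodes deserialize as string entries, <subfield> elements as Subfield entries,
        // and XmlSerializer keeps them in document order in this one array.
        [XmlText(typeof(string))]
        [XmlElement("subfield", typeof(Subfield))]
        public object[] Items;
    }

    class Demo
    {
        static void Main()
        {
            string xml = "<field name=\"test_field_0\">Some text.<subfield>Sub.</subfield>More text.</field>";
            Field field = (Field)new XmlSerializer(typeof(Field)).Deserialize(new StringReader(xml));
            foreach (object item in field.Items)
            {
                Subfield sf = item as Subfield;
                Console.WriteLine(sf != null ? "subfield: " + sf.Value : "text: " + item);
            }
        }
    }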

    Read the article

  • How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    - by Gweebz
    I am working on a legacy code base with an existing DB schema. The existing code uses SQL and PL/SQL to execute queries on the DB. We have been tasked with making a small part of the project database-engine agnostic (at first, change everything eventually). We have chosen to use Hibernate 3.3.2.GA and "*.hbm.xml" mapping files (as opposed to annotations). Unfortunately, it is not feasible to change the existing schema because we cannot regress any legacy features. The problem I am encountering is when I am trying to map a uni-directional, one-to-many relationship where the FK is also part of a composite PK. Here are the classes and mapping file... CompanyEntity.java public class CompanyEntity { private Integer id; private Set<CompanyNameEntity> names; ... } CompanyNameEntity.java public class CompanyNameEntity implements Serializable { private Integer id; private String languageId; private String name; ... } CompanyNameEntity.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="com.example"> <class name="com.example.CompanyEntity" table="COMPANY"> <id name="id" column="COMPANY_ID"/> <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false"> <key column="COMPANY_ID"/> <one-to-many entity-name="vendorName"/> </set> </class> <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME"> <composite-id> <key-property name="id" column="COMPANY_ID"/> <key-property name="languageId" column="LANGUAGE_ID"/> </composite-id> <property name="name" column="NAME" length="255"/> </class> </hibernate-mapping> This code works just fine for SELECT and INSERT of a Company with names. I encountered a problem when I tried to update and existing record. I received a BatchUpdateException and after looking through the SQL logs I saw Hibernate was trying to do something stupid... update COMPANY_NAME set COMPANY_ID=null where COMPANY_ID=? Hibernate was trying to dis-associate child records before updating them. The problem is that this field is part of the PK and not-nullable. I found the quick solution to make Hibernate not do this is to add "not-null='true'" to the "key" element in the parent mapping. SO now may mapping looks like this... CompanyNameEntity.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="com.example"> <class name="com.example.CompanyEntity" table="COMPANY"> <id name="id" column="COMPANY_ID"/> <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false"> <key column="COMPANY_ID" not-null="true"/> <one-to-many entity-name="vendorName"/> </set> </class> <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME"> <composite-id> <key-property name="id" column="COMPANY_ID"/> <key-property name="languageId" column="LANGUAGE_ID"/> </composite-id> <property name="name" column="NAME" length="255"/> </class> </hibernate-mapping> This mapping gives the exception... org.hibernate.MappingException: Repeated column in mapping for entity: companyName column: COMPANY_ID (should be mapped with insert="false" update="false") My problem now is that I have tryed to add these attributes to the key-property element but that is not supported by the DTD. 
I have also tried changing it to a key-many-to-one element but that didn't work either. So... How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?
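
    One variant that is often suggested for exactly this "repeated column" situation — untested against this schema, so treat it as a sketch — is to keep not-null on the collection key but also mark the key column as never updated, since it is owned by the child's composite id. Note that the original mapping writes entity-name="vendorName" in the set while the child class is mapped as entity-name="companyName"; the sketch assumes these should match:

    <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
        <!-- not-null stops the "set COMPANY_ID = null" statement; update="false" tells Hibernate
             not to rewrite a column that belongs to the child's composite id -->
        <key column="COMPANY_ID" not-null="true" update="false"/>
        <one-to-many entity-name="companyName"/>
    </set>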

    Read the article

  • What is New in ASP.NET 4.0 Code Access Security

    - by Xiaohong
    ASP.NET Code Access Security (CAS) is a feature that helps protect server applications that host multiple Web sites: ASP.NET lets you assign a configurable trust level that corresponds to a predefined set of permissions. ASP.NET has predefined trust levels and policy files that you can assign to applications, and you can also assign custom trust levels and policy files. Most web hosting companies run ASP.NET applications in Medium Trust to prevent one website from affecting or harming another. As the .NET Framework's Code Access Security model has evolved, ASP.NET 4.0 Code Access Security has also introduced several changes and improvements.
    The main change in ASP.NET 4.0 CAS
    In ASP.NET 4.0 partial trust applications, the application domain can have a default partial trust permission set, as opposed to being full trust. The permission set name is defined in the new permissionSetName attribute of the <trust /> element and is used to initialize the application domain. By default, the PermissionSetName attribute value is "ASP.Net", which is the name of the permission set you can find in all predefined partial trust configuration files. <trust level="Something" permissionSetName="ASP.Net" /> This is the new ASP.NET 4.0 CAS model. For compatibility, ASP.NET 4.0 also supports the legacy CAS model, where the application domain still has a full trust permission set. You can specify the new legacyCasModel attribute on the <trust /> element to indicate whether the legacy CAS model is enabled. By default legacyCasModel is false, which means that the new 4.0 CAS model is the default. <trust level="Something" legacyCasModel="true|false" /> In the .NET Framework 4.0 Config directory there are two sets of predefined partial trust config files, one for the new CAS model and one for the legacy CAS model; the files named legacy.XYZ.config are for the legacy CAS model:
    New CAS model / Legacy CAS model:
    web_hightrust.config / legacy.web_hightrust.config
    web_mediumtrust.config / legacy.web_mediumtrust.config
    web_lowtrust.config / legacy.web_lowtrust.config
    web_minimaltrust.config / legacy.web_minimaltrust.config
    The figure below shows, in the ASP.NET 4.0 new CAS model, what permission set is granted to code for a partial trust application using the predefined partial trust levels and policy files:
    There are also some benefits that come with the new CAS model: You can lock down a machine by making all managed code no-execute by default (e.g. setting the MyComputer zone to have no managed code execution permissions) while it is still possible to configure ASP.NET web applications to run as either full trust or partial trust. A UNC share doesn't require full trust with CASPOL at the machine-level CAS policy. A side effect that comes with the new CAS model: the processRequestInApplicationTrust attribute is deprecated in the new CAS model, since the application domain always has a partial trust permission set there.
    In the ASP.NET 4.0 legacy CAS model (or the ASP.NET 2.0 CAS model), even though you assign a partial trust level to an application, the application domain still has a full trust permission set. The figure below shows, in the ASP.NET 4.0 legacy CAS model (or ASP.NET 2.0 CAS model), what permission set is granted to code for a partial trust application using the predefined partial trust levels and policy files:
    What $AppDirUrl$, $CodeGen$ and $Gac$ represent:
    $AppDirUrl$ The application's virtual root directory. This allows permissions to be applied to code that is located in the application's bin directory.
For example, if a virtual directory is mapped to C:\YourWebApp, then $AppDirUrl$ would equate to C:\YourWebApp. $CodeGen$ The directory that contains dynamically generated assemblies (for example, the result of .aspx page compiles). This can be configured on a per application basis and defaults to %windir%\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files. $CodeGen$ allows permissions to be applied to dynamically generated assemblies. $Gac$ Any assembly that is installed in the computer's global assembly cache (GAC). This allows permissions to be granted to strong named assemblies loaded from the GAC by the Web application.   The new customization of CAS Policy in ASP.NET 4.0 new CAS model 1. Define which named permission set in partial trust configuration files By default the permission set that will be assigned at application domain initialization time is the named "ASP.Net" permission set found in all predefined partial trust configuration files. However ASP.NET 4.0 allows you set PermissionSetName attribute to define which named permission set in a partial trust configuration file should be the one used to initialize an application domain. Example: add "ASP.Net_2" named permission set in partial trust configuration file: <PermissionSet class="NamedPermissionSet" version="1" Name="ASP.Net_2"> <IPermission class="FileIOPermission" version="1" Read="$AppDir$" PathDiscovery="$AppDir$" /> <IPermission class="ReflectionPermission" version="1" Flags ="RestrictedMemberAccess" /> <IPermission class="SecurityPermission " version="1" Flags ="Execution, ControlThread, ControlPrincipal, RemotingConfiguration" /></PermissionSet> Then you can use "ASP.Net_2" named permission set for the application domain permission set: <trust level="Something" legacyCasModel="false" permissionSetName="ASP.Net_2" /> 2. Define a custom set of Full Trust Assemblies for an application By using the new fullTrustAssemblies element to configure a set of Full Trust Assemblies for an application, you can modify set of partial trust assemblies to full trust at the machine, site or application level. The configuration definition is shown below: <fullTrustAssemblies> <add assemblyName="MyAssembly" version="1.1.2.3" publicKey="hex_char_representation_of_key_blob" /></fullTrustAssemblies> 3. Define <CodeGroup /> policy in partial trust configuration files ASP.NET 4.0 new CAS model will retain the ability for developers to optionally define <CodeGroup />with membership conditions and assigned permission sets. The specific restriction in ASP.NET 4.0 new CAS model though will be that the results of evaluating custom policies can only result in one of two outcomes: either an assembly is granted full trust, or an assembly is granted the partial trust permission set currently associated with the running application domain. It will not be possible to use custom policies to create additional custom partial trust permission sets. When parsing the partial trust configuration file: Any assemblies that match to code groups associated with "PermissionSet='FullTrust'" will run at full trust. Any assemblies that match to code groups associated with "PermissionSet='Nothing'" will result in a PolicyError being thrown from the CLR. This is acceptable since it provides administrators with a way to do a blanket-deny of managed code followed by selectively defining policy in a <CodeGroup /> that re-adds assemblies that would be allowed to run. 
Any assemblies that match to code groups associated with other permissions sets will be interpreted to mean the assembly should run at the permission set of the appdomain. This means that even though syntactically a developer could define additional "flavors" of partial trust in an ASP.NET partial trust configuration file, those "flavors" will always be ignored. Example: defines full trust in <CodeGroup /> for my strong named assemblies in partial trust config files: <CodeGroup class="FirstMatchCodeGroup" version="1" PermissionSetName="Nothing"> <IMembershipCondition    class="AllMembershipCondition"    version="1" /> <CodeGroup    class="UnionCodeGroup"    version="1"    PermissionSetName="FullTrust"    Name="My_Strong_Name"    Description="This code group grants code signed full trust. "> <IMembershipCondition      class="StrongNameMembershipCondition" version="1"       PublicKeyBlob="hex_char_representation_of_key_blob" /> </CodeGroup> <CodeGroup   class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">   <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$AppDirUrl$/*" /> </CodeGroup> <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">   <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$CodeGen$/*"   /> </CodeGroup></CodeGroup>   4. Customize CAS policy at runtime in ASP.NET 4.0 new CAS model ASP.NET 4.0 new CAS model allows to customize CAS policy at runtime by using custom HostSecurityPolicyResolver that overrides the ASP.NET code access security policy. Example: use custom host security policy resolver to resolve partial trust web application bin folder MyTrustedAssembly.dll to full trust at runtime: You can create a custom host security policy resolver and compile it to assembly MyCustomResolver.dll with strong name enabled and deploy in GAC: public class MyCustomResolver : HostSecurityPolicyResolver{ public override HostSecurityPolicyResults ResolvePolicy(Evidence evidence) { IEnumerator hostEvidence = evidence.GetHostEnumerator(); while (hostEvidence.MoveNext()) { object hostEvidenceObject = hostEvidence.Current; if (hostEvidenceObject is System.Security.Policy.Url) { string assemblyName = hostEvidenceObject.ToString(); if (assemblyName.Contains(“MyTrustedAssembly.dll”) return HostSecurityPolicyResult.FullTrust; } } //default fall-through return HostSecurityPolicyResult.DefaultPolicy; }} Because ASP.NET accesses the custom HostSecurityPolicyResolver during application domain initialization, and a custom policy resolver requires full trust, you also can add a custom policy resolver in <fullTrustAssemblies /> , or deploy in the GAC. You also need configure a custom HostSecurityPolicyResolver instance by adding the HostSecurityPolicyResolverType attribute in the <trust /> element: <trust level="Something" legacyCasModel="false" hostSecurityPolicyResolverType="MyCustomResolver, MyCustomResolver" permissionSetName="ASP.Net" />   Note: If an assembly policy define in <CodeGroup/> and also in hostSecurityPolicyResolverType, hostSecurityPolicyResolverType will win. If an assembly added in <fullTrustAssemblies/> then the assembly has full trust no matter what policy in <CodeGroup/> or in hostSecurityPolicyResolverType.   
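
    If you want to confirm from inside a running page which model your application domain actually ended up in, the AppDomain itself can tell you. A small diagnostic sketch (the output text is illustrative): in the new CAS model a partial trust site runs in a homogenous, not fully trusted domain, while the legacy model reports a fully trusted domain:

    var domain = System.AppDomain.CurrentDomain;
    Response.Write("IsHomogenous: " + domain.IsHomogenous);       // true under the new 4.0 CAS model
    Response.Write(", IsFullyTrusted: " + domain.IsFullyTrusted); // false for a partial trust site in the new model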
Other changes in ASP.NET 4.0 CAS Use the new transparency model introduced in .Net Framework 4.0 Change in dynamically compiled code generated assemblies by ASP.NET: In new CAS model they will be marked as security transparent level2 to use Framework 4.0 security transparent rule that means partial trust code is treated as completely Transparent and it is more strict enforcement. In legacy CAS model they will be marked as security transparent level1 to use Framework 2.0 security transparent rule for compatibility. Most of ASP.NET products runtime assemblies are also changed to be marked as security transparent level2 to switch to SecurityTransparent code by default unless SecurityCritical or SecuritySafeCritical attribute specified. You also can look at Security Changes in the .NET Framework 4 for more information about these security attributes. Support conditional APTCA If an assembly is marked with the Conditional APTCA attribute to allow partially trusted callers, and if you want to make the assembly both visible and accessible to partial-trust code in your web application, you must add a reference to the assembly in the partialTrustVisibleAssemblies section: <partialTrustVisibleAssemblies> <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />/partialTrustVisibleAssemblies>   Most of ASP.NET products runtime assemblies are also changed to be marked as conditional APTCA to prevent use of ASP.NET APIs in partial trust environments such as Winforms or WPF UI controls hosted in Internet Explorer.   Differences between ASP.NET new CAS model and legacy CAS model: Here list some differences between ASP.NET new CAS model and legacy CAS model ASP.NET 4.0 legacy CAS model  : Asp.net partial trust appdomains have full trust permission Multiple different permission sets in a single appdomain are allowed in ASP.NET partial trust configuration files Code groups Machine CAS policy is honored processRequestInApplicationTrust attribute is still honored    New configuration setting for legacy model: <trust level="Something" legacyCASModel="true" ></trust><partialTrustVisibleAssemblies> <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" /></partialTrustVisibleAssemblies>   ASP.NET 4.0 new CAS model: ASP.NET will now run in homogeneous application domains. Only full trust or the app-domain's partial trust grant set, are allowable permission sets. It is no longer possible to define arbitrary permission sets that get assigned to different assemblies. If an application currently depends on fine-tuning the partial trust permission set using the ASP.NET partial trust configuration file, this will no longer be possible. processRequestInApplicationTrust attribute is deprecated Dynamically compiled assemblies output by ASP.NET build providers will be updated to explicitly mark assemblies as transparent. ASP.NET partial trust grant sets will be independent from any enterprise, machine, or user CAS policy levels. A simplified model for locking down web servers that only allows trusted managed web applications to run. Machine policy used to always grant full-trust to managed code (based on membership conditions) can instead be configured using the new ASP.NET 4.0 full-trust assembly configuration section. The full-trust assembly configuration section requires explicitly listing each assembly as opposed to using membership conditions. 
Alternatively, the membership condition(s) used in machine policy can instead be re-defined in a <CodeGroup /> within ASP.NET's partial trust configuration file to grant full-trust.   New configuration setting for new model: <trust level="Something" legacyCASModel="false" permissionSetName="ASP.Net" hostSecurityPolicyResolverType=".NET type string" ></trust><fullTrustAssemblies> <add assemblyName=”MyAssembly” version=”1.0.0.0” publicKey="hex_char_representation_of_key_blob" /></fullTrustAssemblies><partialTrustVisibleAssemblies> <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" /></partialTrustVisibleAssemblies>     Hope this post is helpful to better understand the ASP.Net 4.0 CAS. Xiaohong Tang ASP.NET QA Team

    Read the article

  • Looking into ASP.Net MVC 4.0 Mobile Development - part 1

    - by nikolaosk
    In this post I will be looking at how ASP.Net MVC 4.0 helps us to create web solutions that target mobile devices. We all experience the magic that is the World Wide Web through mobile devices. Millions of people around the world use tablets and smartphones to view the contents of websites, e-shops and portals. ASP.Net MVC 4.0 includes a new mobile project template and the ability to render a different set of views for different types of devices. There is a new feature called browser overriding which allows us to control exactly what a user is going to see from your web application regardless of what type of device he is using. In order to follow along with this post you must have Visual Studio 2012 and .Net Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. It will work fine in Windows 7 as well, so do not worry if you do not have the latest Microsoft operating system.
    1) Launch VS 2012 and create a new application by going to File -> New Project -> ASP.Net MVC 4 Web Application and then click OK. Have a look at the picture below
    2) From the available templates select Mobile Application and then click OK. Have a look at the picture below
    3) When I run the application I get the mobile view of the page. I would like to show you what a typical ASP.Net MVC 4.0 application looks like, so I will create a new simple ASP.Net MVC 4.0 Web Application. When I run the application I get the normal page view. Have a look at the picture below. On the left is the mobile view and on the right the normal view. As you can see we have more or less the same content in our mobile application (log in, register) compared with the normal ASP.Net MVC 4.0 application, but it is optimised for mobile devices.
    4) Let me explain how and when the mobile view is selected and finally rendered. There is a feature in MVC 4.0 that is called Display Modes, and with this feature the runtime will select a view. If we have 2 views, e.g. contact.mobile.cshtml and contact.cshtml, in our application, the Controller at some point will instruct the runtime to select and render a view named contact. The runtime will look at the browser making the request and will determine if it is a mobile browser or a desktop browser. So if there is a request from my iPhone Safari browser for a particular site, and there is a mobile view, MVC 4.0 will select it and render it. If there is not a mobile view, the normal view will be rendered.
    5) In the ASP.Net MVC 4.0 (Internet Application) project I created earlier (not the first project, which was a mobile one) I can run it once more and see how it looks in the browser. If I want to view it with a mobile browser I must download an emulator like Opera Mobile. You can download Opera Mobile here. When I run the application I get the same view in both the desktop and the mobile browser. That was to be expected.
    Have a look at the picture below
    6) Then I create another version of the _Layout.mobile.cshtml view in the Shared folder. I simply copy and paste the _Layout.cshtml into the same folder, rename it to _Layout.mobile.cshtml and then just alter the contents of _Layout.mobile.cshtml. When I run the application again I get a different view on the desktop browser and a different one on the Opera Mobile browser. Have a look at the picture below
    The Controller will instruct the ASP.Net runtime to select and render a view named _Layout.mobile.cshtml when the request comes from a mobile browser. The runtime knows that a browser is a mobile one through the ASP.Net browser capability provider. Hope it helps!!!
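
    The Display Modes feature described in step 4 can also be extended from code, which is also where the browser overriding feature hooks in. A rough sketch, registered in Application_Start in Global.asax.cs; the "iPhone" suffix and the user-agent check are just an example, not something the post itself uses:

    using System;
    using System.Web.WebPages;   // DisplayModeProvider, DefaultDisplayMode

    protected void Application_Start()
    {
        // Evaluate this mode before the built-in "Mobile" and default modes.
        DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("iPhone")
        {
            ContextCondition = context => context.GetOverriddenUserAgent()
                .IndexOf("iPhone", StringComparison.OrdinalIgnoreCase) >= 0
        });

        // ...the usual MVC 4 registration calls follow here
    }

    With this in place, a request whose (possibly overridden) user agent contains "iPhone" will prefer contact.iPhone.cshtml over contact.mobile.cshtml and contact.cshtml.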

    Read the article

  • Configure Forms based authentication in SharePoint 2010

    - by sreejukg
      Configuring form authentication is a straight forward task in SharePoint. Mostly public facing websites built on SharePoint requires form based authentication. Recently, one of the WCM implementation where I was included in the project team required registration system. Any internet user can register to the site and the site offering them some membership specific functionalities once the user logged in. Since the registration open for all, I don’t want to store all those users in Active Directory. I have decided to use Forms based authentication for those users. This is a typical scenario of form authentication in SharePoint implementation. To implement form authentication you require the following A data store where you are storing the users – technically this can be active directory, SQL server database, LDAP etc. Form authentication will redirect the user to the login page, if the request is not authenticated. In the login page, there will be controls that validate the user inputs against the configured data store. In this article, I am going to use SQL server database with ASP.Net membership API’s to configure form based authentication in SharePoint 2010. This article assumes that you have SQL membership database available. I already configured the membership and roles database using aspnet_regsql command. If you want to know how to configure membership database using aspnet_regsql command, read the below blog post. http://weblogs.asp.net/sreejukg/archive/2011/06/16/usage-of-aspnet-regsql-exe-in-asp-net-4.aspx The snapshot of the database after implementing membership and role manager is as follows. I have used the database name “aspnetdb_claim”. Make sure you have created the database and make sure your database contains tables and stored procedures for membership. Create a web application with claims based authentication. This article assumes you already created a web application using claims based authentication. If you want to enable forms based authentication in SharePoint 2010, you must enable claims based authentication. Read this post for creating a web application using claims based authentication. http://weblogs.asp.net/sreejukg/archive/2011/06/15/create-a-web-application-in-sharepoint-2010-using-claims-based-authentication.aspx  You make sure, you have selected enable form authentication, and then selected Membership provider and Role manager name. To make sure you are done with the configuration, navigate to central administration website, from central administration, navigate to the Web Applications page, select the web application and click on icon, you will see the authentication providers for the current web application. Go to the section Claims authentication types, and make sure you have enabled forms based authentication. As mentioned in the snapshot, I have named the membership provider as SPFormAuthMembership and role manager as SPFormAuthRoleManager. You can choose your own names as you need. Modify the configuration files(Web.Config) to enable form authentication There are three applications that needs to be configured to support form authentication. The following are those applications. Central Administration If you want to assign permissions to web application using the credentials from form authentication, you need to update Central Administration configuration. If you do not want to access form authentication credentials from Central Administration, just leave this step.  
STS service application Security Token service is the service application that issues security token when users are logging in. You need to modify the configuration of STS application to make sure users are able to login. To find the STS application, follow the following steps Go to the IIS Manager Expand the sites Node, you will see SharePoint Web Services Expand SharePoint Web Services, you can see SecurityTokenServiceApplication Right click SecuritytokenServiceApplication and click explore, it will open the corresponding file system. By default, the path for STS is C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\SecurityToken You need to modify the configuration file available in the mentioned location. The web application that needs to be enabled with form authentication. You need to modify the configuration of your web application to make sure your web application identifies users from the form authentication.   Based on the above, I am going to modify the web configuration. At end of each step, I have mentioned the expected output. I recommend you to go step by step and after each step, make sure the configuration changes are working as expected. If you do everything all together, and test your application at the end, you may face difficulties in troubleshooting the configuration errors. Modifications for Central Administration Web.Config Open the web.config for the Central administration in a text editor. I always prefer Visual Studio, for editing web.config. In most cases, the path of the web.config for the central administration website is as follows C:\inetpub\wwwroot\wss\VirtualDirectories\<port number> Make sure you keep a backup copy of the web.config, before editing it. Let me summarize what we are going to do with Central Administration web.config. First I am going to add a connection string that points to the form authentication database, that I created as mentioned in previous steps. Then I need to add a membership provider and a role manager with the corresponding connectionstring. Then I need to update the peoplepickerwildcards section to make sure the users are appearing in search results. By default there is no connection string available in the web.config of Central Administration. Add a connection string just after the configsections element. The below is the connection string I have used all over the article. <add name="FormAuthConnString" connectionString="Initial Catalog=yourdatabasename;data source=databaseservername;Integrated Security=SSPI;" /> Once you added the connection string, the web.config look similar to Now add membership provider to the code. In web.config for CA, there will be <membership> tag, search for it. You will find membership and role manager under the <system.web> element. Under the membership providers section add the below code… <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding memberhip element, see the snapshot of the web.config. Now you need to add role manager element to the web.config. Insider providers element under rolemanager, add the below code. 
<add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding, your role manager will look similar to the following. As a last step, you need to update the people picker wildcard element in web.config, so that the users from your membership provider are available for browsing in Central Administration. Search for PeoplePickerWildcards in the web.config, add the following inside the <PeoplePickerWildcards> tag. <add key="SPFormAuthMembership" value="%" /> After adding this element, your web.config will look like After completing these steps, you can browse the users available in the SQL server database from central administration website. Go to the site collection administrator’s page from central administration. Select the site collection you have created for form authentication. Click on the people picker icon, choose Forms Auth and click on the search icon, you will see the users listed from the SQL server database. Once you complete these steps, make sure the users are available for browsing from central administration website. If you are unable to find the users, there must be some errors in the configuration, check windows event logs to find related errors and fix them. Change the web.config for STS application Open the web.config for STS application in text editor. By default, STS web.config does not have system.Web or connectionstrings section. Just after the System.Webserver element, add the following code. <connectionStrings> <add name="FormAuthConnString" connectionString="Initial Catalog=aspnetdb_claim;data source=sp2010_db;Integrated Security=SSPI;" /> </connectionStrings> <system.web> <roleManager enabled="true" cacheRolesInCookie="false" cookieName=".ASPXROLES" cookieTimeout="30" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="All" createPersistentCookie="false" maxCachedResults="25"> <providers> <add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </roleManager> <membership userIsOnlineTimeWindow="15" hashAlgorithmType=""> <providers> <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </membership> </system.web> See the snapshot of the web.config after adding the required elements. After adding this, you should be able to login using the credentials from SQL server. Try assigning a user as primary/secondary administrator for your site collection from Central Administration and login to your site using form authentication. If you made everything correct, you should be able to login. This means you have successfully completed configuration of STS Configuration of Web Application for Form Authentication As a last step, you need to modify the web.config of the form authentication web application. Once you have done this, you should be able to grant permissions to users stored in the membership database. Open the Web.config of the web application you created for form authentication. 
You can find the web.config for the application under the path C:\inetpub\wwwroot\wss\VirtualDirectories\<port number> Basically you need to add connection string, membership provider, role manager and update the people picker wild card configuration. Add the connection string (same as the one you added to the web.config in Central Administration). See the screenshot after the connection string has added. Search for <membership> in the web.config, you will find this inside system.web element. There will be other providers already available there. You add your form authentication membership provider (similar to the one added to Central Administration web.config) to the provider element under membership. Find the snapshot of membership configuration as follows. Search for <roleManager> element in web.config, add the new provider name under providers section of the roleManager element. See the snapshot of web.config after new provider added. Now you need to configure the peoplepickerwildcard configuration in web.config. As I specified earlier, this is to make sure, you can locate the users by entering a part of their username. Add the following line under the <PeoplePickerWildcards> element in web.config. See the screenshot of the peoplePickerWildcards element after the element has been added. Now you have completed all the setup for form authentication. Navigate to the web application. From the site actions -> site settings -> go to peope and groups Click on new -> add users, it will popup the people picker dialog. Click on the icon, select Form Auth, enter a username in the search textbox, and click on search icon. See the screenshot of admin search when I tried searching the users If it displays the user, it means you are done with the configuration. If you add users to the form authentication database, the users will be able to access SharePoint portal as normal.
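
    If the membership database does not have any accounts in it yet, one quick way to create a test user is the ASP.NET Membership API itself, run from a page or small harness that is configured with the same SPFormAuthMembership provider. This is only a sketch; the user name, password and e-mail below are placeholders:

    using System.Web.Security;

    MembershipCreateStatus status;
    Membership.Providers["SPFormAuthMembership"].CreateUser(
        "testuser",                // placeholder user name
        "Pass@word1",              // placeholder password
        "testuser@example.com",    // placeholder e-mail
        null, null,                // supply real values if the provider requires question/answer
        true,                      // isApproved
        null,                      // providerUserKey
        out status);
    // status should come back as MembershipCreateStatus.Success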

    Read the article

  • JavaScript Browser Hacks

    Recently during one of my client side scripting classes, I was trying to show my students some basic examples of JavaScript as an introduction to the language. My first basic example was to show an alert box using JavaScript via the address bar. The student's reaction to my browser hack example really caught me off guard in a good way. After programming with a language for close to 10 years you start to lose the "Awe Cool!" effect that new learners of a language experience when writing code. New learners of JavaScript are the reason why I created this post. Please enjoy.
    Note: Place the JavaScript into the address bar and then press the Enter key.
    Example 1: JavaScript alert box displaying my name: John Doe
    Javascript:alert('My name: \n John Doe') ;
    Example 2: JavaScript alert box displaying a name entered by the user.
    javascript:alert('My name: \n ' + prompt('Enter Name','Name')) ;
    Example 3: JavaScript alert box displaying a name entered by the user, and then displaying the length of the name.
    javascript:var name= prompt('Enter Name','Name'); alert('My name: \n ' + name); alert(name.length);
    If you notice, the address bar will execute JavaScript on the current page loaded in the browser using the Document Object Model (DOM). Additionally, the address bar will allow multiple statements to be executed sequentially even though all of the code is contained within one line, due to the fact that the JavaScript interpreter uses the ";" to indicate where a statement ends and a new one begins. After doing a little more research on the topic of JavaScript browser hacks I found a few other cool JavaScript hacks, which I will list below.
    Example 4: Make any webpage editable
    Source: http://www.openjason.com/2008/09/02/browser-hack-make-any-web-page-editable/
    javascript:document.body.contentEditable='true'; document.designMode='on'; void 0;
    Example 5: CHINESE DRAGON DANCING
    Source: http://nzeyi.wordpress.com/2009/06/01/dwrajaxjavascript-hacks-the-secrets-of-javascript-in-the-adress-bar/
    javascript:R=0;x1=0.1;y1=0.05;x2=0.25;y2=0.24;x3=1.6;y3=0.24;x4=300;y4=200;x5=300;y5=200;DI=document.links;DIL=DI.length;A=function(){for(i=0;i-DIL;i++){DI[i].style.position='absolute';DI[i].style.left=Math.sin(R*x1+i*x2+x3)*x4+x5;DI[i].style.top=Math.cos(R*y1+i*y2+y3)*y4+y5}R++;};setInterval('A()',5);void(0);
    Example 6: Reveal content stored in password protected fields
    javascript:(function(){var s,F,j,f,i; s = ""; F = document.forms; for(j=0; j<F.length; ++j){ f = F[j]; for(i=0; i<f.length; ++i){ if(f[i].type.toLowerCase() == "password") s += f[i].value + "\n"; } } if(s) alert("Passwords in forms on this page:\n\n" + s); else alert("There are no passwords in forms on this page.");})();
    Example 7: Force user to close browser window
    Source: http://forums.digitalpoint.com/showthread.php?t=767053
    javascript:while(1){alert('Restart your browser to close this box!')}
    Learn more about JavaScript Browser Hacks.

    Read the article

  • Integrating Twitter Into An ASP.NET Website Using OAuth

    Earlier this year I wrote an article about Twitterizer, an open-source .NET library that can be used to integrate your application with Twitter. Using Twitterizer you can allow your visitors to post tweets, view their timeline, and much more, all without leaving your website. The original article, Integrating Twitter Into An ASP.NET Website, showed how to post tweets to and view the timeline of a particular Twitter account using Twitterizer 1.0. To post a tweet to a specific account, Twitterizer 1.0 uses basic authentication. Basic authentication is a very simple authentication scheme. For an application to post a tweet to JohnDoe's Twitter account, it would submit JohnDoe's username and password (along with the tweet text) to Twitter's servers. Basic authentication, while easy to implement, is not an ideal authentication scheme as it requires that the integrating application know the username(s) and password(s) of the accounts that it is connected to. Consequently, a user must share her password in order to connect her Twitter account with the application. Such password sharing is not only insecure, but it can also cause difficulties down the line if the user changes her password or decides that she no longer wants to connect her account to certain applications (but wants to remain connected to others). To remedy these issues, Twitter introduced support for OAuth, which is a simple, secure protocol for granting API access. In a nutshell, OAuth allows a user to connect an application to their Twitter account without having to share their password. Instead, the user is sent to Twitter's website where they confirm whether they want to connect to the application. Upon confirmation, Twitter generates a token that is then sent back to the application. The application then submits this token when integrating with the user's account. The token serves as proof that the user has allowed this application access to their account. (Twitter users can view what applications they're connected to and may revoke these tokens on an application-by-application basis.) In late 2009, Twitter announced that it was ending its support for basic authentication in June 2010. As a result, the code examined in Integrating Twitter Into An ASP.NET Website, which uses basic authentication, will no longer work once the cut-off date is reached. The good news is that Twitterizer version 2.0 supports OAuth. This article examines how to use Twitterizer 2.0 and OAuth from a website. Specifically, we'll see how to retrieve and display a user's latest tweets and how to post a tweet from an ASP.NET page. Read on to learn more!
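
    Once a user has authorized the application and the resulting access token and secret have been stored, posting on their behalf with Twitterizer 2.x looks roughly like the sketch below. This is from memory of the 2.x OAuthTokens / TwitterStatus API, so check the Twitterizer documentation for the exact signatures; the key and token values are placeholders:

    Twitterizer.OAuthTokens tokens = new Twitterizer.OAuthTokens();
    tokens.ConsumerKey = "your-consumer-key";              // from your registered Twitter application
    tokens.ConsumerSecret = "your-consumer-secret";
    tokens.AccessToken = "stored-access-token";            // obtained once via the OAuth authorization flow
    tokens.AccessTokenSecret = "stored-access-token-secret";

    // Post a tweet for the user who granted access
    Twitterizer.TwitterStatus.Update(tokens, "Hello from my ASP.NET site!");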

    Read the article


  • hyperv vss writer unexpected error

    - by Eric Martin
    I am using Mozy Pro to backup our Hyperv servers. I am doing this without any issues on a 2nd server but this box hasn't backed up sucessfully yet. I was told by the support tech at Mozy to type: vssadmin list providers >c:\providers.txt vssadmin list writers >c:\writers.txt Writers.txt: vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool (C) Copyright 2001-2013 Microsoft Corp. Writer name: 'Task Scheduler Writer' Writer Id: {d61d61c8-d73a-4eee-8cdd-f6f9786b7124} Writer Instance Id: {1bddd48e-5052-49db-9b07-b96f96727e6b} State: [1] Stable Last error: No error Writer name: 'VSS Metadata Store Writer' Writer Id: {75dfb225-e2e4-4d39-9ac9-ffaff65ddf06} Writer Instance Id: {088e7a7d-09a8-4cc6-a609-ad90e75ddc93} State: [1] Stable Last error: No error Writer name: 'Performance Counters Writer' Writer Id: {0bada1de-01a9-4625-8278-69e735f39dd2} Writer Instance Id: {f0086dda-9efc-47c5-8eb6-a944c3d09381} State: [1] Stable Last error: No error Writer name: 'System Writer' Writer Id: {e8132975-6f93-4464-a53e-1050253ae220} Writer Instance Id: {506e7d9c-ded3-4edf-824a-4dd9af7f7538} State: [1] Stable Last error: No error Writer name: 'ASR Writer' Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4} Writer Instance Id: {1de438e4-09de-487c-9ea8-eeafbe3fd210} State: [1] Stable Last error: No error Writer name: 'COM+ REGDB Writer' Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f} Writer Instance Id: {511d23d9-4cbb-400f-b739-e6e0a8ecdbee} State: [1] Stable Last error: No error Writer name: 'Microsoft Hyper-V VSS Writer' Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de} Writer Instance Id: {32f41185-2b20-41ff-a7aa-92c262f578cd} State: [1] Stable Last error: Unexpected error Writer name: 'Registry Writer' Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485} Writer Instance Id: {fa328ece-623f-43cc-9888-e897e108c40e} State: [1] Stable Last error: No error Writer name: 'Shadow Copy Optimization Writer' Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f} Writer Instance Id: {7b582861-7f7f-4c10-adb1-5106bcab3902} State: [1] Stable Last error: No error Writer name: 'WMI Writer' Writer Id: {a6ad56c2-b509-4e6c-bb19-49d8f43532f0} Writer Instance Id: {d2f73a0f-c19a-44cc-bcd2-6c84ac6e516b} State: [1] Stable Last error: No error Writer name: 'MSMQ Writer (MSMQ)' Writer Id: {7e47b561-971a-46e6-96b9-696eeaa53b2a} Writer Instance Id: {95ea6efc-c00c-47ca-90d1-28fbe6d7a8d0} State: [1] Stable Last error: No error Writer name: 'IIS Config Writer' Writer Id: {2a40fd15-dfca-4aa8-a654-1f8c654603f6} Writer Instance Id: {d5a32f43-0675-400d-8502-cdece4c867e1} State: [1] Stable Last error: No error Providers.txt: vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool (C) Copyright 2001-2013 Microsoft Corp. Provider name: 'Microsoft File Share Shadow Copy provider' Provider type: Fileshare Provider Id: {89300202-3cec-4981-9171-19f59559e0f2} Version: 1.0.0.1 Provider name: 'Microsoft Software Shadow Copy provider 1.0' Provider type: System Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5} Version: 1.0.0.7 The tech said I needed to resolve this issue: Writer name: 'Microsoft Hyper-V VSS Writer' Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de} Writer Instance Id: {32f41185-2b20-41ff-a7aa-92c262f578cd} State: [1] Stable Last error: Unexpected error I checked the event viewer and this is the only thing I found related to hyperv: I don't know where to start to resolve this or to find out where the issue is at. I know nothing of the vss writer for hyperv so any input would be greatly appreciated.
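
    A common first troubleshooting step when the Hyper-V VSS writer is stuck reporting an error is to restart the Hyper-V Virtual Machine Management service (which re-registers the writer) and then re-check the writer list. From an elevated prompt, and ideally in a maintenance window, something like:

    net stop vmms
    net start vmms
    vssadmin list writers

    Restarting vmms does not stop the running virtual machines, only the management service; if the writer still shows "Unexpected error" afterwards, the Hyper-V-VMMS event logs are the next place to look.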

    Read the article

  • EPM 11.1.2 - Configure a data source to support Essbase failover in active-passive clustering mode

    - by Ahmed A
    To configure a data source to support Essbase failover in active-passive clustering mode, replace the Essbase Server name value with the APS URL followed by the Essbase cluster name. For example, if the APS URL is http://<hostname>:13090/aps and the Essbase cluster name is EssbaseCluster-1, then the value in the Essbase Server name field would be:
    http://<hostname>:13090/aps/Essbase?clusterName=EssbaseCluster-1
    Note: Entering the Essbase cluster name without the APS URL in the Essbase Server name field is not supported in this release.

    Read the article

  • WSS 3.0/MOSS 2007 Active Directory Forms Based Authentication PeoplePicker no users found

    - by John Haigh
    After finding these steps online from http://dattard.blogspot.com/2008/11/active-directory-forms-based.html in order to set up Active Directory Forms Based Authentication, I was all set to complete this task, except for one problem. These steps are missing one vital step in order for FBA to work with Active Directory: a supplement to step 3, needed before granting access in step 5 through the people picker. You need to specify the Active Directory provider name to the people picker, otherwise you will not be able to specify users through the Policy for Web Application.
    <PeoplePickerWildcards>
      <clear />
      <add key="ADMembershipProvider" value="%" />
    </PeoplePickerWildcards>
    Recently we needed to use Forms Based Authentication with Active Directory from an Extranet. This is how we got it to work.
    1. Extend the Web Application
    Instead of tweaking the internal web app, extend the web application you want to expose to the Extranet, giving it the required host headers etc.
    2. Configure SharePoint Central Admin to use FBA for the "new" Web Applications
    Login to SharePoint Central Admin. Go to Application Management / Application Security / Authentication Providers and change the Web Application to the one which needs to be configured for Forms Based Authentication. Click zone / default, change authentication type to forms, enter ActiveDirectoryMembershipProvider under membership provider name (for example, "ADMembershipProvider") and save this change.
    3. Update the web.config of the SharePoint Central admin site
    Under the configuration node:
    <connectionStrings> <add name="ADConnectionString" connectionString="LDAP://DynamicsAX.local/CN=Users,DC=DynamicsAX,DC=local" /> </connectionStrings>
    Under the system.web node:
    <membership defaultProvider="ADMembershipProvider"> <providers> <add name="ADMembershipProvider" type="System.Web.Security.ActiveDirectoryMembershipProvider,System.Web,Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ADConnectionString" connectionUsername="xxx" connectionPassword="yyy" enableSearchMethods="true" attributeMapUsername="sAMAccountName"/> </providers> </membership>
    4. Update the web.config of the SharePoint Web application
    Repeat step 3 for the web.config of the SharePoint web application to be configured for Forms Based Authentication. Change the authentication in web.config to:
    <authentication mode="Forms"> <forms loginUrl="/_layouts/login.aspx"></forms> </authentication>
    5. Grant Access on the extended Web Application
    Your extranet web application is now configured to use FBA. However, until users who will be accessing the site via FBA are given permissions for the site, it will be inaccessible to them. To get started, open your browser and navigate to your farm's Central Administration site. Click on Application Management and then click on Policy for Web Application. Make sure that you are working on the extranet web application. Do the following steps: Click on Add Users. In the Zones drop down, select the appropriate Extranet zone. IMPORTANT: If you select the incorrect zone, you may not be able to resolve user names. Hence, the zone you select must match the zone of the web application that is configured to use FBA. Click the Next button. In the Users edit box, type the name of the FBA user whom you wish to have full control for the site. Click the Resolve link next to the Users edit box.
If the web application's FBA information has been configured correctly, the name will resolve and become underlined. Check the Full Control checkbox. Click the Finish button.
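    As a side note (not part of the original steps): a quick way to sanity-check the membership provider before going through the people picker is to ask it for a user directly. The C# sketch below assumes the "ADMembershipProvider" name from steps 2 and 3 and has to run somewhere that loads the web application's web.config (for example a test page); it is an illustration, not code from the article.

        using System;
        using System.Web.Security;

        public static class FbaProviderCheck
        {
            // Looks up a user through the configured Active Directory membership provider.
            public static void Check(string userName, string password)
            {
                MembershipProvider provider = Membership.Providers["ADMembershipProvider"];
                if (provider == null)
                {
                    Console.WriteLine("Provider not registered - re-check the <membership> section.");
                    return;
                }

                MembershipUser user = provider.GetUser(userName, false);
                Console.WriteLine(user == null
                    ? "User not found - check the connection string and base DN."
                    : "Found: " + user.UserName);

                // Optionally confirm the credentials themselves are accepted.
                Console.WriteLine("ValidateUser: " + provider.ValidateUser(userName, password));
            }
        }

    If the provider cannot find the user here, the people picker will not find it either, which is usually the symptom described in the title.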

    Read the article

  • What does "Local Domain Name" on router do and how do I get it to work?

    - by Giovanni Galbo
    I have a D-Link DGL-4500 router. One of the settings is "Local Domain Name," which I have set to local (see screenshot). What I expect is for me to be able to hit my computers via name, e.g. m6.local should resolve to one of my computers; but this isn't happening. I know that I can do this via hosts file, but it would be neat if I could do it via the router... plus I have devices like an iPad that don't let you edit the hosts file. Am I misunderstanding this router feature or am I doing something wrong?
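    As an aside, one way to test whether the router is actually serving the local domain suffix (rather than judging from the browser) is a small lookup from a machine on the LAN. This is only an illustrative sketch; it assumes the m6.local name from the question:

        using System;
        using System.Net;

        class LocalDomainCheck
        {
            static void Main()
            {
                try
                {
                    // Resolves through whatever DNS server DHCP handed out (normally the router).
                    IPHostEntry entry = Dns.GetHostEntry("m6.local");
                    foreach (IPAddress address in entry.AddressList)
                        Console.WriteLine(address);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Name did not resolve: " + ex.Message);
                }
            }
        }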

    Read the article

  • Juju Zookeeper & Provisioning Agent Not Deployed

    - by Keith Tobin
    I am using juju with the openstack provider, i expected that when i bootstrap that zookeeper and provisioning agent would get deployed on the bootstrap vm in openstack. This dose not seem to be the case. the bootstrap vm gets deployed but it seems that nothing gets deployed to the VM. See logs below, I may be missing something, also how is it possible to log on the bootstrap vm. Could I manual deploy, if so what do I need to do. Juju Bootstrap commend root@cinder01:/home/cinder# juju -v bootstrap 2012-10-12 03:21:20,976 DEBUG Initializing juju bootstrap runtime 2012-10-12 03:21:20,982 WARNING Verification of xxxxS certificates is disabled for this environment. Set 'ssl-hostname-verification' to ensure secure communication. 2012-10-12 03:21:20,982 DEBUG openstack: using auth-mode 'userpass' with xxxx:xxxxxx.10:35357/v2.0/ 2012-10-12 03:21:21,064 DEBUG openstack: authenticated til u'2012-10-13T08:21:13Z' 2012-10-12 03:21:21,064 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors' 2012-10-12 03:21:21,091 DEBUG openstack: 200 '{"flavors": [{"id": "3", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "bookmark"}], "name": "m1.medium"}, {"id": "4", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "bookmark"}], "name": "m1.large"}, {"id": "1", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "bookmark"}], "name": "m1.tiny"}, {"id": "5", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "bookmark"}], "name": "m1.xlarge"}, {"id": "2", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "bookmark"}], "name": "m1.small"}]}' 2012-10-12 03:21:21,091 INFO Bootstrapping environment 'openstack' (origin: ppa type: openstack)... 2012-10-12 03:21:21,091 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state 2012-10-12 03:21:21,092 DEBUG openstack: GET 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state' 2012-10-12 03:21:21,165 DEBUG openstack: 200 '{}\n' 2012-10-12 03:21:21,165 DEBUG Verifying writable storage 2012-10-12 03:21:21,165 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/bootstrap-verify 2012-10-12 03:21:21,166 DEBUG openstack: PUT 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/bootstrap-verify' 2012-10-12 03:21:21,251 DEBUG openstack: 201 '201 Created\n\n\n\n ' 2012-10-12 03:21:21,251 DEBUG Launching juju bootstrap instance. 
2012-10-12 03:21:21,271 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/juju_master_id 2012-10-12 03:21:21,273 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups 2012-10-12 03:21:21,273 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups' 2012-10-12 03:21:21,321 DEBUG openstack: 200 '{"security_groups": [{"rules": [{"from_port": -1, "group": {}, "ip_protocol": "icmp", "to_port": -1, "parent_group_id": 1, "ip_range": {"cidr": "0.0.0.0/0"}, "id": 7}, {"from_port": 22, "group": {}, "ip_protocol": "tcp", "to_port": 22, "parent_group_id": 1, "ip_range": {"cidr": "0.0.0.0/0"}, "id": 38}], "tenant_id": "d5f52673953f49e595279e89ddde979d", "id": 1, "name": "default", "description": "default"}]}' 2012-10-12 03:21:21,322 DEBUG Creating juju security group juju-openstack 2012-10-12 03:21:21,322 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups' 2012-10-12 03:21:21,401 DEBUG openstack: 200 '{"security_group": {"rules": [], "tenant_id": "d5f52673953f49e595279e89ddde979d", "id": 48, "name": "juju-openstack", "description": "juju group for openstack"}}' 2012-10-12 03:21:21,401 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-group-rules' 2012-10-12 03:21:21,504 DEBUG openstack: 200 '{"security_group_rule": {"from_port": 22, "group": {}, "ip_protocol": "tcp", "to_port": 22, "parent_group_id": 48, "ip_range": {"cidr": "0.0.0.0/0"}, "id": 54}}' 2012-10-12 03:21:21,504 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-group-rules' 2012-10-12 03:21:21,647 DEBUG openstack: 200 '{"security_group_rule": {"from_port": 1, "group": {"tenant_id": "d5f52673953f49e595279e89ddde979d", "name": "juju-openstack"}, "ip_protocol": "tcp", "to_port": 65535, "parent_group_id": 48, "ip_range": {}, "id": 55}}' 2012-10-12 03:21:21,647 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-group-rules' 2012-10-12 03:21:21,791 DEBUG openstack: 200 '{"security_group_rule": {"from_port": 1, "group": {"tenant_id": "d5f52673953f49e595279e89ddde979d", "name": "juju-openstack"}, "ip_protocol": "udp", "to_port": 65535, "parent_group_id": 48, "ip_range": {}, "id": 56}}' 2012-10-12 03:21:21,792 DEBUG Creating machine security group juju-openstack-0 2012-10-12 03:21:21,792 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups' 2012-10-12 03:21:21,871 DEBUG openstack: 200 '{"security_group": {"rules": [], "tenant_id": "d5f52673953f49e595279e89ddde979d", "id": 49, "name": "juju-openstack-0", "description": "juju group for openstack machine 0"}}' 2012-10-12 03:21:21,871 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/detail 2012-10-12 03:21:21,871 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/detail' 2012-10-12 03:21:21,906 DEBUG openstack: 200 '{"flavors": [{"vcpus": 2, "disk": 10, "name": "m1.medium", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 40, "ram": 4096, "id": "3", "swap": ""}, {"vcpus": 4, "disk": 10, "name": "m1.large", "links": [{"href": 
"xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 80, "ram": 8192, "id": "4", "swap": ""}, {"vcpus": 1, "disk": 0, "name": "m1.tiny", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 512, "id": "1", "swap": ""}, {"vcpus": 8, "disk": 10, "name": "m1.xlarge", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 160, "ram": 16384, "id": "5", "swap": ""}, {"vcpus": 1, "disk": 10, "name": "m1.small", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 20, "ram": 2048, "id": "2", "swap": ""}]}' 2012-10-12 03:21:21,907 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers 2012-10-12 03:21:21,907 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers' 2012-10-12 03:21:22,284 DEBUG openstack: 202 '{"server": {"OS-DCF:diskConfig": "MANUAL", "id": "a598b402-8678-4447-baeb-59255409a023", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "bookmark"}], "adminPass": "SuFp48cZzdo4"}}' 2012-10-12 03:21:22,284 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/juju_master_id 2012-10-12 03:21:22,285 DEBUG openstack: PUT 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/juju_master_id' 2012-10-12 03:21:22,375 DEBUG openstack: 201 '201 Created\n\n\n\n ' 2012-10-12 03:21:27,379 DEBUG Waited for 5 seconds for networking on server u'a598b402-8678-4447-baeb-59255409a023' 2012-10-12 03:21:27,380 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023 2012-10-12 03:21:27,380 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023' 2012-10-12 03:21:27,556 DEBUG openstack: 200 '{"server": {"OS-EXT-STS:task_state": "networking", "addresses": {"private": [{"version": 4, "addr": "10.0.0.8"}]}, "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "bookmark"}], "image": {"id": "5bf60467-0136-4471-9818-e13ade75a0a1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/images/5bf60467-0136-4471-9818-e13ade75a0a1", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "building", "OS-EXT-SRV-ATTR:instance_name": "instance-00000060", "flavor": {"id": "1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", 
"rel": "bookmark"}]}, "id": "a598b402-8678-4447-baeb-59255409a023", "user_id": "01610f73d0fb4922aefff09f2627e50c", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "config_drive": "", "status": "BUILD", "updated": "2012-10-12T08:21:23Z", "hostId": "1cdb25708fb8e464d83a69fe4a024dcd5a80baf24a82ec28f9d9f866", "OS-EXT-SRV-ATTR:host": "nova01", "key_name": "", "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "name": "juju openstack instance 0", "created": "2012-10-12T08:21:22Z", "tenant_id": "d5f52673953f49e595279e89ddde979d", "metadata": {}}}' 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 2012-10-12 03:21:27,557 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-floating-ips 2012-10-12 03:21:27,557 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-floating-ips' 2012-10-12 03:21:27,815 DEBUG openstack: 200 '{"floating_ips": [{"instance_id": "a0e0df11-91c0-4801-95b3-62d910d729e9", "ip": "xxxx.35", "fixed_ip": "10.0.0.5", "id": 447, "pool": "nova"}, {"instance_id": "b84f1a42-7192-415e-8650-ebb1aa56e97f", "ip": "xxxx.36", "fixed_ip": "10.0.0.6", "id": 448, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.37", "fixed_ip": null, "id": 449, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.38", "fixed_ip": null, "id": 450, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.39", "fixed_ip": null, "id": 451, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.40", "fixed_ip": null, "id": 452, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.41", "fixed_ip": null, "id": 453, "pool": "nova"}]}' 2012-10-12 03:21:27,815 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023/action 2012-10-12 03:21:27,816 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023/action' 2012-10-12 03:21:28,356 DEBUG openstack: 202 '' 2012-10-12 03:21:28,356 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state 2012-10-12 03:21:28,357 DEBUG openstack: PUT 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state' 2012-10-12 03:21:28,446 DEBUG openstack: 201 '201 Created\n\n\n\n ' 2012-10-12 03:21:28,446 INFO 'bootstrap' command finished successfully Juju Status Command root@cinder01:/home/cinder# juju -v status 2012-10-12 03:23:28,314 DEBUG Initializing juju status runtime 2012-10-12 03:23:28,320 WARNING Verification of xxxxS certificates is disabled for this environment. Set 'ssl-hostname-verification' to ensure secure communication. 2012-10-12 03:23:28,320 DEBUG openstack: using auth-mode 'userpass' with xxxx:xxxxxx.10:35357/v2.0/ 2012-10-12 03:23:28,320 INFO Connecting to environment... 
2012-10-12 03:23:28,403 DEBUG openstack: authenticated til u'2012-10-13T08:23:20Z' 2012-10-12 03:23:28,403 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state 2012-10-12 03:23:28,403 DEBUG openstack: GET 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state' 2012-10-12 03:23:35,480 DEBUG openstack: 200 'zookeeper-instances: [a598b402-8678-4447-baeb-59255409a023]\n' 2012-10-12 03:23:35,480 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023 2012-10-12 03:23:35,480 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023' 2012-10-12 03:23:35,662 DEBUG openstack: 200 '{"server": {"OS-EXT-STS:task_state": null, "addresses": {"private": [{"version": 4, "addr": "10.0.0.8"}, {"version": 4, "addr": "xxxx.37"}]}, "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "bookmark"}], "image": {"id": "5bf60467-0136-4471-9818-e13ade75a0a1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/images/5bf60467-0136-4471-9818-e13ade75a0a1", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000060", "flavor": {"id": "1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "bookmark"}]}, "id": "a598b402-8678-4447-baeb-59255409a023", "user_id": "01610f73d0fb4922aefff09f2627e50c", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "", "status": "ACTIVE", "updated": "2012-10-12T08:21:40Z", "hostId": "1cdb25708fb8e464d83a69fe4a024dcd5a80baf24a82ec28f9d9f866", "OS-EXT-SRV-ATTR:host": "nova01", "key_name": "", "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "name": "juju openstack instance 0", "created": "2012-10-12T08:21:22Z", "tenant_id": "d5f52673953f49e595279e89ddde979d", "metadata": {}}}' 2012-10-12 03:23:35,663 DEBUG Connecting to environment using xxxx.37... 2012-10-12 03:23:35,663 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="xxxx.37" remote_port="2181" local_port="45859". 
2012-10-12 03:23:36,173:4355(0x7fd581973700):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5 2012-10-12 03:23:36,173:4355(0x7fd581973700):ZOO_INFO@log_env@662: Client environment:host.name=cinder01 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@669: Client environment:os.name=Linux 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-23-generic 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@671: Client environment:os.version=#36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@679: Client environment:user.name=cinder 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@687: Client environment:user.home=/root 2012-10-12 03:23:36,175:4355(0x7fd581973700):ZOO_INFO@log_env@699: Client environment:user.dir=/home/cinder 2012-10-12 03:23:36,175:4355(0x7fd581973700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:45859 sessionTimeout=10000 watcher=0x7fd57f9146b0 sessionId=0 sessionPasswd= context=0x2c1dab0 flags=0 2012-10-12 03:23:36,175:4355(0x7fd577fff700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:45859] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2012-10-12 03:23:39,512:4355(0x7fd577fff700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:45859] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2012-10-12 03:23:42,848:4355(0x7fd577fff700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:45859] zk retcode=-4, errno=111(Connection refused): server refused to accept the client ^Croot@cinder01:/home/cinder#
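The repeated "connection refused" lines above suggest that nothing is listening on the ZooKeeper port of the bootstrap VM. As a purely hypothetical diagnostic (not a juju command), a check along these lines, pointed at the instance's floating IP (the xxxx.37 address in the log), would confirm whether port 2181 is open at all:

    using System;
    using System.Net.Sockets;

    class ZookeeperPortCheck
    {
        static void Main(string[] args)
        {
            string host = args[0]; // the bootstrap instance's floating IP

            try
            {
                using (TcpClient client = new TcpClient())
                {
                    client.Connect(host, 2181); // ZooKeeper client port
                    Console.WriteLine("Port 2181 is open - something is listening.");
                }
            }
            catch (SocketException ex)
            {
                Console.WriteLine("Port 2181 unreachable: " + ex.Message);
            }
        }
    }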

    Read the article

  • Oracle remains the database market leader in 2011

    - by Anne Manke
    With the publication of the latest edition of "Market Share: All Software Markets, Worldwide 2011", the world's leading market research firm Gartner confirms Oracle's market leadership in relational database management systems (RDBMS). Over the past year Oracle even extended its lead over its competitors in the RDBMS market, with stable growth of 18%: its market share rose from 48.2% in 2010 to 48.8% in 2011. That puts the gap to Oracle's closest pursuer, IBM, at 28.6 percentage points.

                  Revenue 2010 ($USM)   Revenue 2011 ($USM)   Growth 2010   Growth 2011   Share 2010   Share 2011
    Oracle                  9,990.5              11,787.0         10.9%         18.0%        48.2%        48.8%
    IBM                     4,300.4               4,870.4          5.4%         13.3%        20.7%        20.2%
    Microsoft               3,641.2               4,098.9         10.1%         12.6%        17.6%        17.0%
    SAP/Sybase                744.4               1,101.1         12.8%         47.9%         3.6%         4.6%
    Teradata                  754.7                 882.3         16.9%         16.9%         3.6%         3.7%

    Source: Gartner's "Market Share: All Software Markets, Worldwide 2011," March 29, 2012, by Colleen Graham, Joanne Correia, David Coyle, Fabrizio Biscotti, Matthew Cheung, Ruggero Contu, Yanna Dharmasthira, Tom Eid, Chad Eschinger, Bianca Granetto, Hai Hong Swinehart, Sharon Mertz, Chris Pang, Asheesh Raina, Dan Sommer, Bhavish Sood, Marianne D'Aquila, Laurie Wurster and Jie

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation delete the Azure deployment. The standing joke with the audience was that it was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour was $1.92, factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing, you pay for what you use, and if you need massive compute power for a short period of time using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour.  This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that become popular in the 80’s and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produced a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
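As a rough illustration of the "basic console application with a few nested loops" mentioned above, a sketch along the following lines would emit PolyRay object statements for a grid of pins. The spacing, counts and file name are illustrative assumptions, not the values used for the real 11,481-pin board:

    using System.IO;
    using System.Text;

    class PinBoardWriter
    {
        static void Main()
        {
            StringBuilder scene = new StringBuilder();
            const double spacing = 0.14;

            // Two offset matrixes of pins, as used in the physical pin-board toys.
            for (int layer = 0; layer < 2; layer++)
            {
                double offset = layer * spacing / 2;
                for (int x = 0; x < 76; x++)
                {
                    for (int y = 0; y < 76; y++)
                    {
                        double px = -5.3 + offset + x * spacing;
                        double py = -1.3 + offset + y * spacing;
                        double pz = 0.0; // later set per pin from the depth image

                        scene.AppendLine(string.Format(
                            "object {{ sphere <{0:0.000}, {1:0.000}, {2:0.000}>, 0.06 chrome }}",
                            px, py, pz));
                        scene.AppendLine(string.Format(
                            "object {{ cylinder <{0:0.000}, {1:0.000}, {2:0.000}>, <{0:0.000}, {1:0.000}, {3:0.000}>, 0.03 chrome }}",
                            px, py, pz, pz - 0.5));
                    }
                }
            }

            File.WriteAllText("pinboard.pi", scene.ToString());
        }
    }

The pz value of each pin is the one that is later driven by the depth image, as described next.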
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used. Windows Kinect The Kenect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kenect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developers perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK, the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pin in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence JPEG files that will be used to animate the pins in the animation the Start and Stop buttons are used to start and stop the image recording. En example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster that an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server. Number of Servers Cost 1 $500 16 $8,000 256 $128,000   As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure a job that takes 256 hours to complete could be perfumed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below. Number of Worker Roles Render Time Cost 1 256 hours $30.72 16 16 hours $30.72 256 1 hour $30.72   Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components: ·         On-Premise o   Windows Kinect – Used combined with the Kinect Explorer to create a stream of depth images. o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue. o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process. o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete. ·         Windows Azure o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.   The architecture of each worker role is shown below.   The worker role is configured to use local storage, which provides file storage on the worker role instance that can be use by the applications to render the image and transform the format of the image. 
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As usage of the worker role instances is billed by the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented me from showing the ease of scalability of the application.

The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out to be very cost effective.

- Windows Azure can provide massive compute power, on demand, in a matter of minutes.
- The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
- Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
- No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

- Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective; a minimal sketch of this pattern is included after these tips. The design I have used in this scenario could easily scale to many thousands of worker role instances.
- Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
- Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
- Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
- If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
- Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a start-up task and possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
- Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
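To make the queue tip concrete, here is a minimal sketch of the pattern only. It is not the CloudRay code: the real render farm uses .NET worker roles pulling messages from an Azure Storage queue, whereas this sketch uses Python threads and an in-process queue purely to show how workers pulling items from a shared queue balances the load. All names and counts are illustrative.

import queue
import threading
import time

FRAME_COUNT = 32     # illustrative; the full animation had 2,000 frames
WORKER_COUNT = 4     # illustrative; the real farm scaled from 16 to 256 instances

work_queue = queue.Queue()
rendered_by = {worker_id: [] for worker_id in range(WORKER_COUNT)}

def worker(worker_id):
    # Each worker repeatedly takes the next frame number off the shared queue,
    # "renders" it, and records the result. Slower workers simply take fewer
    # frames, which is what balances the load without a central scheduler.
    while True:
        try:
            frame = work_queue.get(timeout=1)
        except queue.Empty:
            return  # queue drained, worker shuts down
        time.sleep(0.01)  # stand-in for the expensive ray tracing of one frame
        rendered_by[worker_id].append(frame)
        work_queue.task_done()

# Enqueue one message per frame, then start the workers.
for frame in range(FRAME_COUNT):
    work_queue.put(frame)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(WORKER_COUNT)]
for t in threads:
    t.start()
work_queue.join()
for t in threads:
    t.join()

for worker_id, frames in rendered_by.items():
    print("worker %d rendered %d frames" % (worker_id, len(frames)))

In the Azure version the queue lives in storage rather than in memory, so newly provisioned instances can join in at any time and simply start draining the same queue, which is why the 240 late-arriving instances still picked up a useful share of the frames.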

    Read the article

  • What is the best Programming Language for Kiosk Application? [closed]

    - by Jen Lin
    I need your suggestions, guys, regarding the project I'm planning to create. I want to create a kiosk software/application that is capable of accessing a database on a server (so there is networking involved), because the information displayed on the kiosk screen will come from a database on another computer. My problem is that I don't know which programming language is best for this kind of application. I'm thinking about using Visual Basic 6.0, since my group is comfortable with that language, but I also want to consider the design; I don't like a plain button. Hope to hear from you guys, thanks much :)

    Read the article

  • How to create an alias for linux server name?

    - by Radek
    The openSUSE server name is 'darkhelmet'. I want to create an alias 'dh' for it, so that 'ssh dh' works and 'http://dh' would work too. What file or files do I have to edit, and where, to make this happen?

    Extract from /etc/hosts on darkhelmet:
    127.0.0.1 localhost
    # special IPv6 addresses
    ::1 localhost ipv6-localhost ipv6-loopback
    fe00::0 ipv6-localnet
    ff00::0 ipv6-mcastprefix
    ff02::1 ipv6-allnodes
    ff02::2 ipv6-allrouters
    ff02::3 ipv6-allhosts
    127.0.0.2 darkhelmet.edumate darkhelmet
    10.0.0.22 db2workgroup db2workgroup

    [root][skroob] nslookup darkhelmet
    Server: 10.0.0.10
    Address: 10.0.0.10#53
    Name: darkhelmet.edumate
    Address: 10.0.0.22
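    For reference, a common way to get this behaviour on a single client machine is a sketch like the following (not part of the original question; the IP is taken from the nslookup output above, and it assumes no site-wide DNS change):

    # /etc/hosts on the client: add "dh" as an extra name for the same address
    10.0.0.22   darkhelmet.edumate   darkhelmet   dh

    # ~/.ssh/config: makes "ssh dh" work even without the hosts entry
    Host dh
        HostName darkhelmet.edumate

    The hosts entry covers both 'ssh dh' and 'http://dh' for that one machine; making 'dh' resolve everywhere would instead need a CNAME or A record in the site's DNS.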

    Read the article

  • Deploying play! 2.0 application on an apache server with a reverse proxy

    - by locrizak
    I'm trying to deploy my Play! 2.0 application on an Ubuntu 11.10 server and I have been running into error after error, so I hope someone can help me here. I am trying to deploy my Play! application using a reverse proxy on Apache 2. I have enabled the Apache proxy modules and configured the proxy.conf file in mods_enabled. The vhost for my domain looks like this:

    <Directory /var/www/stage.domain.com>
        AllowOverride None
        Order Deny,Allow
        Deny from all
    </Directory>

    <VirtualHost *:80>
        DocumentRoot /var/www/stage.domain.com/web
        ServerName stage.domain.com
        ServerAdmin [email protected]

        # ProxyRequests Off
        # ProxyPreserveHost On
        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>
        # ProxyVia On
        # ProxyPass /play/ http://localhost:9000/
        # ProxyPassReverse /play/ http://localhost:9000/

        ErrorLog /var/log/ispconfig/httpd/stage.domain.com/error.log
        ErrorDocument 400 /error/400.html
        ErrorDocument 401 /error/401.html
        ErrorDocument 403 /error/403.html
        ErrorDocument 404 /error/404.html
        ErrorDocument 405 /error/405.html
        ErrorDocument 500 /error/500.html
        ErrorDocument 502 /error/502.html
        ErrorDocument 503 /error/503.html

        <IfModule mod_ssl.c>
        </IfModule>

        <Directory /var/www/stage.domain.com/web>
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        <Directory /var/www/clients/client2/web7/web>
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        # Clear PHP settings of this website
        <FilesMatch "\.ph(p3?|tml)$">
            SetHandler None
        </FilesMatch>
        # mod_php enabled
        AddType application/x-httpd-php .php .php3 .php4 .php5
        php_admin_value sendmail_path "/usr/sbin/sendmail -t -i [email protected]"
        php_admin_value upload_tmp_dir /var/www/clients/client2/web7/tmp
        php_admin_value session.save_path /var/www/clients/client2/web7/tmp
        # PHPIniDir /var/www/conf/web7
        php_admin_value open_basedir /var/www/clients/client2/web7/:/var/www/clients/client2/web7/web:/va$

        # add support for apache mpm_itk
        <IfModule mpm_itk_module>
            AssignUserId web7 client2
        </IfModule>

        <IfModule mod_dav_fs.c>
            # Do not execute PHP files in webdav directory
            <Directory /var/www/clients/client2/web7/webdav>
                <FilesMatch "\.ph(p3?|tml)$">
                    SetHandler None
                </FilesMatch>
            </Directory>
            # DO NOT REMOVE THE COMMENTS!
            # IF YOU REMOVE THEM, WEBDAV WILL NOT WORK ANYMORE!
            # WEBDAV BEGIN
            # WEBDAV END
        </IfModule>

        # <Location /play/>
        #     ProxyPass http://localhost:9000/
        #     SetEnv force-proxy-request-1.0 1
        #     SetEnv proxy-nokeepalive 1
        # </Location>

        ProxyRequests Off
        ProxyPass /play/ http://localhost:9000/
        ProxyPassReverse /play/ localhost:9000/
        ProxyPass /play http://localhost:9000/
        ProxyPassReverse /play http://localhost:9000/
        # SetEnv force-proxy-request-1.0 1
        # SetEnv proxy-nokeepalive 1
    </VirtualHost>

    This vhost file was generated by ispconfig and I have not touched anything that was there before, just added onto it. As you can see from the commented out parts, I have tried a lot of different things based on random tutorials I have found, but all of them have ended up in an Internal Server Error, a 503, and most often a 502 Bad Gateway. I can start Play and it does connect successfully to my database. I can get a page to show up when there is an error, and the Play stack trace error page comes up, but where everything is fine I get one of the errors above. My application.conf file looks like this:

    db info .......
    application.mode=PROD
    logger.root=ERROR
    # Logger used by the framework:
    logger.play=INFO
    # Logger provided to your application:
    logger.application=DEBUG
    http.path="/play/"
    XForwardedSupport="127.0.0.1"

    And my hosts file looks like this (I have never changed or added anything to the hosts file):
    127.0.0.1 localhost
    127.0.1.1 matrix
    # The following lines are desirable for IPv6 capable hosts
    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

    Any insights into what I might be doing wrong, or anything I can try, please let me know! Thanks!!

    Edit: Again, the reverse proxy will work (I checked by sending it to google.com). It is when there is a successful connection to Netty that it fails; it's like Netty refuses the connection to the page.

    Edit 2: Output from apachectl -S:
    _default_:8081 127.0.0.1 (/etc/apache2/sites-enabled/000-apps.vhost:10)
    *:8090 is a NameVirtualHost
        default server 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10)
        port 8090 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10)
    *:80 is a NameVirtualHost
        default server 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1)
        port 80 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1)
        port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
        port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
        port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
        port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
        port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
        port 80 namevhost stage.domain.com (/etc/apache2/sites-enabled/100-stage.domain.com.vhost:7)
        port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
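    For comparison, a minimal Apache reverse-proxy vhost for a Play application listening on port 9000 usually needs little more than the following. This is a generic sketch, not a verified fix for the configuration above, and it proxies the site root rather than the /play/ sub-path used by the poster:

    <VirtualHost *:80>
        ServerName stage.domain.com
        ProxyPreserveHost On
        ProxyRequests Off
        ProxyPass / http://localhost:9000/
        ProxyPassReverse / http://localhost:9000/
    </VirtualHost>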

    Read the article

  • rake aborted! undefined local variable or method

    - by Subhransu
    On a fresh Ubuntu machine, I installed Ruby with sudo apt-get install ruby1.8, then installed RubyGems 1.8 with sudo apt-get install rubygems, and after that installed Rails 3.2.8 with gem install rails. The procedure was very simple, but here comes the problem. When I tried checking the version of rake with rake --trace -version I got the following error:

    rake aborted!
    undefined local variable or method `rsion' for #<Rake::Application:0xb72c731c>
    /var/lib/gems/1.8/gems/rake-0.9.2.2/lib/rake/application.rb:316:in `standard_rake_options'
    /usr/lib/ruby/1.8/optparse.rb:1298:in `eval'
    /var/lib/gems/1.8/gems/rake-0.9.2.2/lib/rake/application.rb:316:in `standard_rake_options'
    /usr/lib/ruby/1.8/optparse.rb:1298:in `call'
    /usr/lib/ruby/1.8/optparse.rb:1298:in `parse_in_order'
    /usr/lib/ruby/1.8/optparse.rb:1254:in `catch'
    /usr/lib/ruby/1.8/optparse.rb:1254:in `parse_in_order'
    /usr/lib/ruby/1.8/optparse.rb:1248:in `order!'
    /usr/lib/ruby/1.8/optparse.rb:1339:in `permute!'
    /usr/lib/ruby/1.8/optparse.rb:1360:in `parse!'
    /var/lib/gems/1.8/gems/rake-0.9.2.2/lib/rake/application.rb:425:in `handle_options'
    /var/lib/gems/1.8/gems/rake-0.9.2.2/lib/rake/application.rb:74:in `init'
    /var/lib/gems/1.8/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling'
    /var/lib/gems/1.8/gems/rake-0.9.2.2/lib/rake/application.rb:72:in `init'
    /var/lib/gems/1.8/gems/rake-0.9.2.2/lib/rake/application.rb:64:in `run'
    /var/lib/gems/1.8/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling'
    /var/lib/gems/1.8/gems/rake-0.9.2.2/lib/rake/application.rb:63:in `run'
    /var/lib/gems/1.8/gems/rake-0.9.2.2/bin/rake:33
    /usr/local/bin/rake:19:in `load'
    /usr/local/bin/rake:19

    Is the problem due to having installed straight from the Ubuntu apt-get package manager?

    Read the article

  • What is the best way to register a domain name in China?

    - by Trevor Allred
    What is the best way (in terms of cost and safety) to register a .cn domain name? We recently received two emails from companies in China (px-vps.org and one other) saying that another company was trying to steal/register our .com domain name in China (as .cn). They then gave us a list of 15 domains, from China to India, that we should register through their company. Now they are saying we need to register for a 5 year minimum at $100 per domain. It's starting to sound like a $10,000 scam. We called 101domains and they said it would be $30 for the registration fee and $30 for the law firm in Shanghai. Who should I go through to avoid spending a lot of money and be sure we don't get ripped off in the process?

    Read the article

  • About Entitlement Grants in ADF Security of JDeveloper 11.1.1.4

    - by frank.nimphius
    Oracle JDeveloper 11.1.1.4 comes with a new ADF Security feature called "entitlement grants". This has nothing to do with Oracle Entitlement Server (OES); it is the ability to group resources into permission sets so they can be granted with a single grant statement. For example, as good practice when organizing your projects, you may have grouped your bounded task flows by functionality and responsibility in sub folders under the WEB-INF directory. If one of the folders holds bounded task flows that are accessible to all authenticated users, you may create an entitlement grant allAuthUserBTF and select all bounded task flows that are accessible for authenticated users as resources. You can then grant allAuthUserBTF to the authenticated-role so that, with only a single grant statement, all selected bounded task flows are protected.

    <permission-sets>
      <permission-set>
        <name>PublicBoundedTaskFlows</name>
        <member-resources>
          <member-resource>
            <resource-name>/WEB-INF/public/home-btf.xml#home-btf</resource-name>
            <type-name-ref>TaskFlowResourceType</type-name-ref>
            <display-name> ... </display-name>
            <actions>view</actions>
          </member-resource>
          <member-resource>
            <resource-name>/WEB-INF/public/preferences-btf.xml#preferences-btf</resource-name>
            <type-name-ref>TaskFlowResourceType</type-name-ref>
            <display-name>...</display-name>
            <actions>view</actions>
          </member-resource>
        </member-resources>
      </permission-set>
    </permission-sets>

    The grant statement for this permission set is added as shown below:

    <grant>
      <grantee>
        <principals>
          <principal>
            <name>authenticated-role</name>
            <class>oracle.security.jps.internal.core.principals.JpsAuthenticatedRoleImpl</class>
          </principal>
        </principals>
      </grantee>
      <permission-set-refs>
        <permission-set-ref>
          <name>PublicBoundedTaskFlows</name>
        </permission-set-ref>
      </permission-set-refs>
    </grant>

    Read the article

  • SOA &amp; Application Grid Specialization step 2 of 6 &ndash; References &amp; Marketing Kits

    - by Jürgen Kress
    In our first step to become SOA Specialized and Application Grid Specialized we highlighted our OMM to register your opportunities. We continue our path to specialization with our marketing offerings to create your reference cases and run joint marketing campaigns.

    References: Be Recognized Through Partner Success Stories

    Oracle delivers a wide variety of services and solutions through our partners, and we believe that those successes should be recognized and promoted. References are also required to become specialized. We showcase our partners’ capabilities in Oracle products and industries through partner success stories that are published on Oracle.com. For significant implementations, we may invite partners to participate in a press release or be interviewed in a podcast. To participate and take a further step towards becoming specialized, please take a minute to complete the form and tell us about the successful project you have implemented. If your story is selected, we will contact you for an interview.

    Create your references

    The partner reference program:
    • Enables partners to be recognized by both Oracle and our customers
    • Provides an opportunity for partners to showcase successes with their customers on Oracle solutions
    • Helps raise awareness of our partners’ capabilities, elevating them above their competition

    Time to submit a SOA and Application Grid reference request today. To learn more about partner references, check out the following resources:
    • Judson Althoff’s YouTube Video: Be Recognized with OPN Specialized Reference Program
    • OPN PartnerCast: Be Recognized…Your Reference Matters!!! (MP3)
    • Partner/Customer Reference Brochure (PDF)

    Marketing Kits

    We have created the OFM 11g marketing kit http://tinyurl.com/soamarketing (OPN account required). The marketing kit includes all the presentations and demos from our launch event. The Oracle package includes:
    • Event templates such as invitation, agenda, confirmation and follow-up templates
    • OFM 11g presentations
    • Free usage of the Oracle Customer Visit Center
    • Condition: mandatory lead registration in the Oracle Open Market Model (OMM)

    To download the material, please make sure that you select the campaign “Enterprise: Fusion Middleware 11g”: OFM 11g Oracle Marketing 4 Partners Package http://tinyurl.com/soamarketing (OPN account required)

    For more information on Specialization please visit our OPN Specialized Webcast Series, and become a member of our SOA Partner Community; for registration please visit www.oracle.com/goto/ema/soa

    Jürgen Kress, SOA Partner Adoption EMEA

    The specialization criteria side by side:
    SOA Specialized | Application Grid Specialized
    Proof 2 transactions with OMM | Proof 2 transactions with OMM
    Create your 2 references | Create your 2 references
    SOA Sales assessment 3 | Oracle Application Grid Sales Specialist
    SOA Pre-Sales assessment 3 | Oracle Application Grid PreSales Specialist
    Support assessment 1 | Support assessment 2
    SOA Implementation assessment 4 | Application Grid Implementation assessment 4

    Read the article

  • JWT Token Security with Fusion Sales Cloud

    - by asantaga
    When integrating Sales Cloud with a 3rd party application you often need to pass the user's identity to the 3rd party application, so that:
    - The 3rd party application knows who the user is.
    - The 3rd party application is able to make web service callbacks to Sales Cloud as that user.

    Until recently this wasn't easily possible without using SAML, and one workaround was to pass the username, potentially even the password, from Sales Cloud to the 3rd party application using URL parameters. With Oracle Fusion R8 we now have a proper solution, and that is called "JWT Token support". This is based on the industry JSON Web Token standard; for more information see here.

    JWT works by allowing the user to generate a token (which lasts a short period of time) for a specific application. This token is then passed to the 3rd party application as a GET parameter. The 3rd party application can then call into Sales Cloud and use this token for all web service calls; the calls will be executed as the user who generated the token in the first place. Alternatively, it can call a special HR web service (UserService - findSelfUserDetails()) with the token, and Fusion will respond with the user's details.

    Some more details

    The following goes through the scenario where you want to embed a 3rd party application within a WebContent frame (iFrame) within the opportunity screen.

    1. Define your application using the topology manager in setup and maintenance. See this documentation link on topology manager.

    2. From within your groovy script which defines the iFrame you wish to embed, write some code which looks like this:

    def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication")
    def crmkey = (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken())
    def url = thirdpartyapplicationurl + "param1=" + OptyId + "&jwt=" + crmkey
    return (url)

    This snippet generates a URL which contains:
    - The hostname/endpoint of the 3rd party application
    - Two parameters: the opportunity id stored in parameter "param1", and the JWT token stored in parameter "jwt"

    3. From your 3rd party application you now have two options:
    - Execute a web service call by first setting the header parameter "Authentication" to the JWT token. The web service call will be executed against Fusion Applications as the user who generated the token.
    - To find out "who you are", set the header parameter "Authentication" and execute the special web service call findSelfUserDetails() in the UserDetailsService.

    For more information:
    - Oracle Sales Cloud Documentation, specific chapter on JWT Token
    - OTN samples, specifically the Rich UI With JWT Token Sample
    - Oracle Fusion Applications General Documentation
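    As an illustration of the 3rd party side of step 3, the following is a minimal sketch only, not code from the article: it pulls param1 and jwt out of the incoming URL and attaches the token to a callback request using the "Authentication" header described above. The callback endpoint and both URLs in the usage comment are hypothetical placeholders; the real endpoint would be whichever Sales Cloud web service your integration needs, such as the UserService mentioned above.

    import requests
    from urllib.parse import urlparse, parse_qs

    def handle_salescloud_callback(incoming_url, fusion_endpoint):
        # Sales Cloud appended the opportunity id and the short-lived JWT token
        # as GET parameters when it built the iFrame URL (see the groovy snippet above).
        params = parse_qs(urlparse(incoming_url).query)
        opty_id = params["param1"][0]
        jwt_token = params["jwt"][0]

        # Call back into Sales Cloud as the user who generated the token by passing
        # the token in the "Authentication" header, as described in the article.
        response = requests.get(fusion_endpoint, headers={"Authentication": jwt_token})
        return opty_id, response.status_code

    # Hypothetical usage; both URLs are placeholders, not real endpoints.
    # handle_salescloud_callback(
    #     "https://thirdparty.example.com/app?param1=12345&jwt=eyJhbGciOi...",
    #     "https://fusion.example.com/someSalesCloudService",
    # )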

    Read the article

< Previous Page | 364 365 366 367 368 369 370 371 372 373 374 375  | Next Page >