Search Results

Search found 23301 results on 933 pages for 'check in policy'.

  • SharePoint Web Part validation set off by Publishing Controls

    - by dunxd
    I have a web part which uses validation on a textbox field to prevent submission of an empty field. When clicking on Check In to Share Draft or Publish, this validation fires, and since I am not actually trying to submit the form, but rather check it in, I'd rather it didn't. How can I achieve this? See also: SharePoint web part form validation blocks updating web part settings - that question has the validation code, and how I solved the problem of the EditorPart setting off the validation.
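
    One approach worth trying (a sketch, not from the original question - the control IDs and group name are illustrative) is to scope the web part's validators with an ASP.NET ValidationGroup. Page-level buttons such as Check In and Publish belong to no group, so they skip any group-scoped validator:

        using System.Web.UI.WebControls;

        // Minimal sketch: only the button sharing the "CommentsGroup"
        // ValidationGroup fires the validator; SharePoint's publishing
        // toolbar buttons are outside the group and bypass it entirely.
        public class CommentsPart : System.Web.UI.WebControls.WebParts.WebPart
        {
            protected override void CreateChildControls()
            {
                TextBox commentsBox = new TextBox();
                commentsBox.ID = "CommentsBox";

                RequiredFieldValidator validator = new RequiredFieldValidator();
                validator.ControlToValidate = "CommentsBox";
                validator.ErrorMessage = "Please enter a value.";
                validator.ValidationGroup = "CommentsGroup"; // scoped to the group

                Button submit = new Button();
                submit.Text = "Submit";
                submit.ValidationGroup = "CommentsGroup"; // only this button validates

                Controls.Add(commentsBox);
                Controls.Add(validator);
                Controls.Add(submit);
            }
        }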

    Read the article

  • PHP checking $_POST

    - by sea_1987
    I have some form fields that, when the form is submitted, create an array within $_POST. I need to check that this array has at least 4 keys. How can I check that?
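
    For what it's worth, a minimal sketch of such a check - assuming the inputs are named as an array, e.g. <input name="fields[street]">, so they arrive as a single $_POST entry ('fields' is an illustrative name):

        <?php
        // Verify the posted array exists and has at least 4 keys.
        if (isset($_POST['fields'])
                && is_array($_POST['fields'])
                && count($_POST['fields']) >= 4) {
            // valid - carry on processing the form
        } else {
            die('Expected at least 4 fields.');
        }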

    Read the article

  • Decoding with html_entity_decode, and then having a problem with &nbsp;

    - by AleGore
    After decoding the special character &nbsp; with the html_entity_decode function, I get spaces in place of the &nbsp; entities. My problem is that when I check if ($decoded_str[5] == ' ') it isn't true, even though $decoded_str[5] holds a space that was &nbsp; before decoding. How can I settle the matter? I need to be able to check it like this: if ($decoded_str[5] == ' ')
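
    The likely culprit: html_entity_decode turns &nbsp; into a non-breaking space (U+00A0), not an ASCII space, so a comparison against ' ' fails even though the output looks like a space. A sketch of one fix - normalising the non-breaking spaces first ($html stands in for the input string):

        <?php
        $decoded_str = html_entity_decode($html, ENT_QUOTES, 'UTF-8');

        // In UTF-8 the non-breaking space is the byte pair "\xC2\xA0"
        // (a single "\xA0" byte in ISO-8859-1); swap it for a plain space.
        $decoded_str = str_replace("\xC2\xA0", ' ', $decoded_str);

        if ($decoded_str[5] == ' ') {
            // now true for positions that held &nbsp; before decoding
        }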

    Read the article

  • OAM OVD integration - Error encountered during performance testing: "LDAP response read timed out, timeout used:2000ms"

    - by siddhartha_sinha
    While working on an OAM OVD integration for one of my clients, I was involved in performance testing of the products, during which I encountered OAM authentication failures while talking to OVD under heavy load. The OAM logs revealed the following:

        oracle.security.am.common.policy.common.response.ResponseException:
        oracle.security.am.engines.common.identity.provider.exceptions.IdentityProviderException:
        OAMSSA-20012: Exception in getting user attributes for user : dummy_user1, idstore MyIdentityStore
        with exception javax.naming.NamingException: LDAP response read timed out, timeout used:2000ms.;
        remaining name 'ou=people,dc=oracle,dc=com'
        at oracle.security.am.common.policy.common.response.IdentityValueProvider.getUserAttribute(IdentityValueProvider.java:271)
        ...

    During the authentication and authorization process, OAM complains that the LDAP repository is taking too long to return user attributes. The default value is 2 seconds, as can be seen from the exception ("2000ms"). While troubleshooting the issue, it was found that the LDAP read timeout can be increased in oam-config.xml. For reference, the attribute to add in the oam-config.xml file is:

        <Setting Name="LdapReadTimeout" Type="xsd:string">2000</Setting>

    However, increasing the timeout is not recommended unless it is absolutely necessary; first ensure that the back-end directory servers are working fine. Instead, I took the path of tuning OVD in the following manner:

    1) Navigate to the ORACLE_INSTANCE/config/OPMN/opmn folder and edit opmn.xml. Search for <data id="java-options" ………> and edit the contents of the file with the highlighted items:

        <category id="start-options"><data id="java-bin" value="$ORACLE_HOME/jdk/bin/java"/><data id="java-options" value="-server -Xms1024m -Xmx1024m -Dvde.soTimeoutBackend=0 -Didm.oracle.home=$ORACLE_HOME -Dcommon.components.home=$ORACLE_HOME/../oracle_common -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/opt/bea/Middleware/asinst_1/diagnostics/logs/OVD/ovd1/ovdGClog.log -XX:+UseConcMarkSweepGC -Doracle.security.jps.config=$ORACLE_INSTANCE/config/JPS/jps-config-jse.xml"/><data id="java-classpath" value="$ORACLE_HOME/ovd/jlib/vde.jar$:$ORACLE_HOME/jdbc/lib/ojdbc6.jar"/></category></module-data><stop timeout="120"/><ping interval="60"/></process-type>

    When the system is busy, a ping from the Oracle Process Manager and Notification Server (OPMN) to Oracle Virtual Directory may fail. As a result, OPMN will restart Oracle Virtual Directory after 20 seconds (the default ping interval). To avoid this, consider increasing the ping interval to 60 seconds or more.

    2) Navigate to the ORACLE_INSTANCE/config/OVD/ovd1 folder. Open the listeners.os_xml file and perform the following changes:

    · Search for <ldap id="Ldap Endpoint"…….> and point the cursor to that line.
    · Change the threads count to 200.
    · Change anonymous bind to Deny.
    · Change workQueueCapacity to 8096.
    · Add a new parameter <useNIO> and set its value to false, viz: <useNIO>false</useNIO>

    Snippet:

        <ldap version="8" id="LDAP Endpoint">
            .......
            .......
            <socketOptions><backlog>128</backlog>
                <reuseAddress>false</reuseAddress>
                <keepAlive>false</keepAlive>
                <tcpNoDelay>true</tcpNoDelay>
                <readTimeout>0</readTimeout>
            </socketOptions>
            <useNIO>false</useNIO></ldap>

    Restart the OVD server.

    For more information on OVD tune-up, refer to http://docs.oracle.com/cd/E25054_01/core.1111/e10108/ovd.htm.

    Please note: a few patches have also been released on the OAM side for performance tune-up. Will provide the updates shortly!
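
    For reference, if raising the OAM-side timeout ever does become unavoidable, it is the single setting shown below in oam-config.xml (the 10000 ms value is purely illustrative; keep it as low as your directory allows):

        <!-- Illustrative only: raises the LDAP read timeout from the 2000 ms default. -->
        <Setting Name="LdapReadTimeout" Type="xsd:string">10000</Setting>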

    Read the article

  • SUSE EC2 Problem - zypper - Permission denied

    - by phuu
    I'm trying to use zypper to install gcc on my Amazon EC2 instance running SUSE. When I try zypper in gcc I get:

        Retrieving repository 'SLE11-SDK-SP1' metadata []
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLE11-SDK-SP1/sle-11-i586/media.1/media' denied.
        Abort, retry, ignore? [a/r/i/?] (a): i
        Retrieving repository 'SLE11-SDK-SP1' metadata [error]
        Repository 'SLE11-SDK-SP1' is invalid.
        Can't provide /media.1/media : User-requested skipping of a file
        Please check if the URIs defined for this repository are pointing to a valid repository.
        Warning: Disabling repository 'SLE11-SDK-SP1' because of the above error.
        Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [|]
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied.
        Abort, retry, ignore? [a/r/i/?] (a): i
        Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [error]
        Repository 'SLE11-SDK-SP1-Updates' is invalid.
        Can't provide /repodata/repomd.xml : User-requested skipping of a file
        Please check if the URIs defined for this repository are pointing to a valid repository.
        Warning: Disabling repository 'SLE11-SDK-SP1-Updates' because of the above error.
        Retrieving repository 'SLES11-Extras' metadata [/]
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied.
        Abort, retry, ignore? [a/r/i/?] (a): r
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied.
        Abort, retry, ignore? [a/r/i/?] (a): zypper in gcc
        Invalid answer 'zypper in gcc'. [a/r/i/?] (a): a
        Retrieving repository 'SLES11-Extras' metadata [error]
        Repository 'SLES11-Extras' is invalid.
        Can't provide /repodata/repomd.xml :
        Please check if the URIs defined for this repository are pointing to a valid repository.
        Warning: Disabling repository 'SLES11-Extras' because of the above error.
        Retrieving repository 'SLES11-SP1' metadata [-]
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLES11-SP1/sle-11-i586/media.1/media' denied.
        Abort, retry, ignore? [a/r/i/?] (a): a
        Retrieving repository 'SLES11-SP1' metadata [error]
        Repository 'SLES11-SP1' is invalid.
        Can't provide /media.1/media :
        Please check if the URIs defined for this repository are pointing to a valid repository.
        Warning: Disabling repository 'SLES11-SP1' because of the above error.
        Retrieving repository 'SLES11-SP1-Updates' metadata []
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied.

    I've searched for the problem and this thread came up, but offered no solutions. I've tried sces-activate. Am I doing something wrong? I should say I'm very new to this, and I admit I don't really know what I'm doing, but I'm trying to learn about setting up and running a server, and so I thought I'd throw myself in at the deep(ish) end. Thanks for reading.

    Read the article

  • WSS 3.0/MOSS 2007 Active Directory Forms Based Authentication PeoplePicker no users found

    - by John Haigh
    After finding these steps online at http://dattard.blogspot.com/2008/11/active-directory-forms-based.html for setting up Active Directory Forms Based Authentication, I was all set to complete this task, except for one problem: those steps are missing one vital step for FBA to work with Active Directory - a supplement to step 3, needed before granting access in step 5 through the people picker. You need to specify the Active Directory provider name to the people picker; otherwise you will not be able to specify users through the Policy for Web Application:

        <PeoplePickerWildcards>
            <clear />
            <add key="ADMembershipProvider" value="%" />
        </PeoplePickerWildcards>

    Recently we needed to use Forms Based Authentication with Active Directory from an Extranet. This is how we got it to work.

    1. Extend the Web Application

    Instead of tweaking the internal web app, extend the web application you want to expose to the Extranet, giving it the required host headers etc.

    2. Configure SharePoint Central Admin to use FBA for the "new" Web Application

    Log in to SharePoint Central Admin, go to Application Management / Application Security / Authentication Providers, and change the Web Application to the one which needs to be configured for Forms Based Authentication. Click zone / default, change the authentication type to forms, enter the ActiveDirectoryMembershipProvider name under membership provider name (for example, "ADMembershipProvider"), and save this change.

    3. Update the web.config of the SharePoint Central Admin site

    Under the configuration node:

        <connectionStrings>
            <add name="ADConnectionString"
                 connectionString="LDAP://DynamicsAX.local/CN=Users,DC=DynamicsAX,DC=local" />
        </connectionStrings>

    Under the system.web node:

        <membership defaultProvider="ADMembershipProvider">
            <providers>
                <add name="ADMembershipProvider"
                     type="System.Web.Security.ActiveDirectoryMembershipProvider,System.Web,Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
                     connectionStringName="ADConnectionString"
                     connectionUsername="xxx"
                     connectionPassword="yyy"
                     enableSearchMethods="true"
                     attributeMapUsername="sAMAccountName"/>
            </providers>
        </membership>

    4. Update the web.config of the SharePoint Web application

    Repeat step 3 for the web.config of the SharePoint web application to be configured for Forms Based Authentication, and change the authentication in web.config to:

        <authentication mode="Forms">
            <forms loginUrl="/_layouts/login.aspx"></forms>
        </authentication>

    5. Grant Access on the extended Web Application

    Your extranet web application is now configured to use FBA. However, until the users who will be accessing the site via FBA are given permissions for the site, it will be inaccessible to them. To get started, open your browser and navigate to your farm's Central Administration site. Click on Application Management and then click on Policy for Web Application. Make sure that you are working on the extranet web application. Then do the following:

    - Click on Add Users.
    - In the Zones drop-down, select the appropriate Extranet zone. IMPORTANT: if you select the incorrect zone, you may not be able to resolve user names; the zone you select must match the zone of the web application that is configured to use FBA.
    - Click the Next button.
    - In the Users edit box, type the name of the FBA user to whom you wish to give full control for the site.
    - Click the Resolve link next to the Users edit box. If the web application's FBA information has been configured correctly, the name will resolve and become underlined.
    - Check the Full Control checkbox.
    - Click the Finish button.

    Read the article

  • Assign highest priority to my local repository

    - by Anwar Shah
    Original question was: "How to assign highest priority to a local repository without using the sources.list file?"

    I have set up a local repository with packages I downloaded. I use it to avoid downloading the same packages over the Internet when I need to reinstall my Ubuntu. It is a basic repository, created with apt-ftparchive packages . > Packages. I made it a trusted repository to avoid the "unauthenticated repository" warning. (When a repository is untrusted, apt and synaptic try to download the same packages over the Internet instead, because that source is trusted.)

    I have been using this local repository for at least a year, but I always have to put my local repository line at the top of the sources.list file to use it. This is annoying, since I must open a terminal and do some typing in it every time I reinstall Ubuntu, even though there is a better tool, software-properties-gtk. I cannot use that tool, since it places the source line at the end of sources.list. And the real problem is that apt and synaptic always download a package from the source mentioned earlier in the list, without inspecting whether the package is already available in the local repository. So I have no choice but to place the local source at the top of sources.list via the terminal (I actually don't hate the terminal, but I need a solution).

    I have tried this method, but it does not help me. My preference file in /etc/apt/preferences.d/local-pin-900 is:

        Package: *
        Pin: release o=Local,n=ubuntu-local
        Pin-Priority: 900

    My release file is this:

        Origin: Local
        Label: Local-Ubuntu
        Description: Local Ubuntu Repository
        Codename: ubuntu-local
        MD5Sum:
         ed43222856d18f389c637ac3d7dd6f85 1043412 Packages
         d41d8cd98f00b204e9800998ecf8427e 0 Sources

    When I enable the apt preference, apt-cache policy correctly shows the preference; e.g. it shows that the local repository has the highest priority. But when I do sudo apt-get install <package-name>, apt tries to download it from the Internet. Yet when I place my local repo at the top, it installs from the local repository.

    So, my question is: is it possible to force apt to use the local repository when the package is available there, without explicitly placing "the local source" at the top of my repository list (i.e. the sources.list file)?

    Edit: the output of apt-cache policy $package_name is as follows:

        nautilus-wipe:
          Installed: (none)
          Candidate: 0.1.1-2
          Version table:
             0.1.1-2 0
                500 http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages
                900 file:/media/Main/Linux-Software/Ubuntu/Precise/ Packages

    It shows that my local repository has higher preference, though it is not the one that comes first in the sources.list file. Here is the output of apt-get install nautilus-wipe:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          nautilus-wipe
        0 upgraded, 1 newly installed, 0 to remove and 131 not upgraded.
        Need to get 30.7 kB of archives.
        After this operation, 150 kB of additional disk space will be used.
        'http://archive.ubuntu.com/ubuntu/pool/universe/n/nautilus-wipe/nautilus-wipe_0.1.1-2_i386.deb' nautilus-wipe_0.1.1-2_i386.deb 30730 MD5Sum:7d497b8dfcefe1c0b51a45f3b0466994

    It is still trying to get the file from the Internet, though I think it should be happy with the local one.
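
    One thing worth experimenting with (a sketch, untested here) is pinning by the empty origin, which per apt_preferences(5) matches local file: archives, with a priority above 1000 so the local version always wins. One caveat, though: pin priorities choose between candidate versions; when two sources carry the identical version, apt may still fetch from whichever source is listed first, which would be consistent with the behaviour shown above.

        # /etc/apt/preferences.d/local-pin-1001  (sketch)
        # 'origin ""' matches archives whose origin host is empty,
        # i.e. local file: repositories.
        Package: *
        Pin: origin ""
        Pin-Priority: 1001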

    Read the article

  • Frequently Asked Questions about Latest EBS Support Changes

    - by Steven Chan (Oracle Development)
    Two important changes to the Oracle Lifetime Support policies for Oracle E-Business Suite were announced at OpenWorld. These changes affect EBS Releases 11i and 12.1. The changes are detailed in this My Oracle Support document: E-Business Suite 11.5.10 Sustaining Support Exception & 12.1 Extended Support Now to Dec. 2018 (Note 1495337.1).

    A new document answering the top Frequently Asked Questions about these support changes is now available: E-Business Suite Releases - Support Policy FAQ (Note 1494891.1). Questions answered in this new FAQ include:

    - Why is Oracle providing an exception for Severity 1 Production Support for the first year of Sustaining Support for EBS 11.5.10?
    - Will customers need to purchase an additional contract for the 11.5.10 Exception to Sustaining Support?
    - What defines Severity 1 Production Support in the 11.5.10 Exception to Sustaining Support?
    - What are the differences in the Lifetime Support Policy feature benefits from Extended Support to the Severity 1 Production Support in the 11.5.10 Exception to Sustaining Support?
    - More questions about US 1099, Payroll legislative updates, security patches, and more

    1. Changes for EBS 11i Sustaining Support

    The first change is that we will be providing an exception for the first 13 months of Sustaining Support on Oracle E-Business Suite Release 11.5.10 (11i10), valid from December 1, 2013 – December 31, 2014. This exception support will be comprised of three components:

    - New fixes for Severity 1 production issues
    - United States Form 1099 2013 year-end updates
    - Payroll regulatory updates for the United States, Canada, United Kingdom, and Australia for fiscal years ending in 2014

    Customers' environments must have the minimum baseline patches (or above) for new Severity 1 production bug fixes, as documented here: Patch Requirements for Extended Support of Oracle E-Business Suite Release 11.5.10 (Note 883202.1).

    2. Changes for EBS 12.1 Extended Support

    More time: the Extended Support period for E-Business Suite Release 12.1 has been extended by nineteen months, through December 2018. Customers with an active Oracle Premier Support for Software contract will automatically be entitled to Extended Support for E-Business Suite 12.1.

    Fees waived: uplift fees are waived for all years of Extended Support (June 2014 – December 2018) for customers with an active Oracle Premier Support for Software contract. During this period, customers will receive all of the components of Extended Support at no additional cost other than their fees for Software Update License & Support.

    Where can I learn more?

    There are two interlocking policies that affect the E-Business Suite: Oracle's Lifetime Support policies for each EBS release (timelines which were updated by this announcement), and the Error Correction Support policies (which state the minimum baselines for new patches). For more information about how these policies interact, see: Understanding Support Windows for E-Business Suite Releases.

    What about E-Business Suite technology stack components? Things get more complicated when one considers individual techstack components such as Oracle Forms or the Oracle Database. To learn more about the interlocking EBS+techstack component support windows, see these two articles:

    - On Apps Tier Patching and Support: A Primer for E-Business Suite Users
    - On Database Patching and Support: A Primer for E-Business Suite Users

    Related Articles

    - Extended Support Fees Waived for E-Business Suite 11i and 12.0
    - EBS 12.0 Minimum Requirements for Extended Support Finalized

    Read the article

  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
    Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and it tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over.

    The problem with locks

    There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series):

    Locks block threads. The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. Each one sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency.

    Locks are slow. Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance.

    Deadlocks. When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see.

    So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection.

    Implementation

    As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it.

    So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method:

        private void GetBucketAndLockNo(
            int hashcode, out int bucketNo, out int lockNo, int bucketCount)
        {
            // the bucket number is the hashcode (without the initial sign bit)
            // modulo the number of buckets
            bucketNo = (hashcode & 0x7fffffff) % bucketCount;

            // and the lock number is the bucket number modulo the number of locks
            lockNo = bucketNo % m_locks.Length;
        }

    However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array. Instead, ConcurrentDictionary uses a strict linked list on each bucket. This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets.

    Modifying the dictionary

    All the operations on the dictionary follow the same basic pattern:

        void AlterBucket(TKey key, ...)
        {
            int bucketNo, lockNo;
        1:  GetBucketAndLockNo(
                key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length);
        2:  lock (m_locks[lockNo])
            {
        3:      Node headNode = m_buckets[bucketNo];
        4:      Mutate the node linked list as appropriate
            }
        }

    For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern.

    Performance issues

    There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.)

    Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of the deadlock: two threads, each holding a lock, and each trying to acquire the other. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them.

    In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block. At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. That would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads controlling them), breaking thread-safety.

    Consequences of growing the array

    Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method, whose operation can be boiled down to:

        private void GrowTable(Node[] buckets)
        {
            try
            {
        1:      Acquire first lock in the locks array

                // this causes any other thread trying to take out
                // all the locks to block, because the first lock in the array
                // is always the one taken out first

                // check if another thread has already resized the buckets array
                // while we were waiting to acquire the first lock
        2:      if (buckets != m_buckets) return;

        3:      Calculate the new size of the backing array
        4:      Node[] array = new array[size];
        5:      Acquire all the remaining locks
        6:      Re-hash the contents of the existing buckets into array
        7:      m_buckets = array;
            }
            finally
            {
        8:      Release all locks
            }
        }

    As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime); when the second thread is unblocked, it'll see that the array has already been resized and exit without doing anything.

    There is another case we need to consider. Looking back at the AlterBucket method above, consider the following situation:

    1. Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers.
    2. Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2.
    3. Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8).
    4. Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid.

    Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe.

    Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here:

        void AlterBucket(TKey key, ...)
        {
            while (true)
            {
                Node[] buckets = m_buckets;
                int bucketNo, lockNo;
                GetBucketAndLockNo(
                    key.GetHashCode(), out bucketNo, out lockNo, buckets.Length);

                lock (m_locks[lockNo])
                {
                    // if the state has changed, go back to the start
                    if (buckets != m_buckets) continue;

                    Node headNode = m_buckets[bucketNo];
                    Mutate the node linked list as appropriate
                }
                break;
            }
        }

    TryGetValue and GetEnumerator

    And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. All lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything.

    However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this). This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (e.g. items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key and value removed before the node itself has been removed from the linked list).

    Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer. Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary.

    It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of it:

    Lockless:
    - TryGetValue
    - GetEnumerator
    - The indexer getter
    - ContainsKey

    Takes out every lock (lockfull?):
    - Count
    - IsEmpty
    - Keys
    - Values
    - CopyTo
    - ToArray

    Concurrent principles

    That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection; that I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming:

    - Partitioning: when using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently to other partitions.
    - Ordered lock-taking: when a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks.
    - Lockless reads: read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection.

    That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!
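
    To see these guarantees from the consumer's side, here is a small driver - my own sketch, not from the post above - that updates a ConcurrentDictionary from several writer tasks while a reader polls it locklessly with TryGetValue:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                var dict = new ConcurrentDictionary<int, int>();

                // Writers: each task touches its own slice of keys. Different
                // buckets map to different stripe locks, so these mostly run
                // in parallel rather than serialising on one big lock.
                var writers = new Task[4];
                for (int t = 0; t < writers.Length; t++)
                {
                    int id = t;
                    writers[id] = Task.Run(() =>
                    {
                        for (int i = 0; i < 100000; i++)
                            dict.AddOrUpdate(id * 100000 + i, 1, (k, v) => v + 1);
                    });
                }

                // Reader: TryGetValue takes no lock, so it never blocks the
                // writers; it simply sees each atomic update as it becomes visible.
                var reader = Task.Run(() =>
                {
                    int found = 0, value;
                    for (int i = 0; i < 400000; i++)
                        if (dict.TryGetValue(i, out value)) found++;
                    Console.WriteLine("reader saw {0} keys", found);
                });

                Task.WaitAll(writers);
                reader.Wait();

                // Count, by contrast, takes out every stripe lock for a consistent view.
                Console.WriteLine("final count: {0}", dict.Count);
            }
        }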

    Read the article

  • Upgrading PHP from 5.1 to 5.2 on CentOS 5.4

    - by andufo
    I'm trying to upgrade PHP 5.1 to 5.2 on CentOS 5.4 using yum update php. The result is this (check out the last part):

        [root@mail httpd]# yum update php
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.raystedman.net
         * base: mirrors.serveraxis.net
         * centosplus: mirrors.tummy.com
         * contrib: mirror.raystedman.net
         * extras: mirror.raystedman.net
         * updates: mirrors.netdna.com
        Setting up Update Process
        Resolving Dependencies
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6-27.el5 for package: php-devel
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Processing Dependency: php-cli = 5.2.10-1.el5.centos for package: php
        --> Processing Dependency: php-common = 5.2.10-1.el5.centos for package: php
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php-cli.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-xml
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-pdo
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-gd
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-ldap
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mbstring
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mysql
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-imap
        ---> Package php-common.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-devel.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php-gd.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-imap.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-ldap.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-mbstring.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-mysql.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-pdo.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-xml.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Finished Dependency Resolution
        php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 from installed has depsolving problems
          --> Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
        Error: Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.
        [root@mail httpd]#

    What are the consequences of using --skip-broken? Any recommendations?
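
    The blocker is the third-party php-eaccelerator package, which hard-requires php = 5.1.6 (--skip-broken would simply leave the conflicting packages out of the transaction, so PHP would stay at 5.1). One common way forward - a sketch, assuming you can live without eAccelerator until a PHP 5.2 build of it is available from your third-party repository - is to remove it first:

        # remove the package that pins PHP at 5.1.6
        yum remove php-eaccelerator

        # the update should now resolve cleanly
        yum update php

        # afterwards, reinstall an eAccelerator build matching PHP 5.2,
        # if your third-party repo (e.g. RPMforge) ships one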

    Read the article

  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story? Hard disk goes missing, USB thumb drive goes missing, laptop goes missing... Not a week goes by that we don't hear about our data going missing... Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property... When I have spoken at Security and IT conferences, part of my message is "Why do you give your users data to lose in the first place?" I don't suggest they can't have access to it... in fact I work for the company that provides the premiere data security and desktop solutions that DO provide access. Access isn't the issue. 'Keeping the data' is the issue.

    We are all human - we all make mistakes... I fault no one for having their car stolen or dropping a USB thumb drive. (Well, except the thieves - I can certainly find some fault there.) Where I find fault is in policy (or lack thereof, sometimes) that allows users to carry around private, and important, data with them. Mr. Director of IT - it is your fault, not theirs. Ms. CSO - look in the mirror.

    It isn't like one can't find a network to access the data from. You are on a network right now. How many wireless ones (wifi, mifi, cellular...) are there around you, right now? Allowing employees to remove data from the confines of (wait for it...) THE DATA CENTER is just plain indefensible when it isn't required. The argument that the laptop had a password and the hard disk was encrypted is ridiculous. An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'.

    What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota. The full article is here, but the point was that a couple of laptops went missing in a couple of different cases, and... well... no one knows where the data is, and yes - they were loaded with patient info. What were you thinking?

    Obviously you can't steal data from a Sun Ray appliance - since it has no data, nor any storage to keep the data on - and Secure Global Desktop allows access from Macs, Linux and Windows client devices... but in all cases, there is no keeping the data unless you explicitly allow for it in your policy. Since you can get at the data securely from any network, why would you want to take personal responsibility for it? Both Sun Rays and Secure Global Desktop are widely used in Healthcare... but clearly not widely enough.

    We need to do a better job of getting the message out - Healthcare (or insert your business type here) and distributed data don't mix. Then add Hot Desking and 'follow me printing' and you have something that Clinicians (and CSOs) love.

    Thanks for putting up my blood pressure, Denny.

    Read the article

  • C#/.NET Little Wonders: The Useful But Overlooked Sets

    - by James Michael Hare
    Once again we consider some of the lesser-known classes and keywords of C#. Today we will be looking at two set implementations in the System.Collections.Generic namespace: HashSet<T> and SortedSet<T>. Even though most people think of sets as mathematical constructs, they are actually very useful classes that can be used to help make your application more performant if used appropriately.

    A Background From Math

    In mathematical terms, a set is an unordered collection of unique items. In other words, the set {2,3,5} is identical to the set {3,5,2}. In addition, the set {2, 2, 4, 1} would be invalid because it would have a duplicate item (2). You can also perform set arithmetic on sets, such as:

    - Intersections: The intersection of two sets is the collection of elements common to both. Example: the intersection of {1,2,5} and {2,4,9} is the set {2}.
    - Unions: The union of two sets is the collection of unique items present in either or both sets. Example: the union of {1,2,5} and {2,4,9} is {1,2,4,5,9}.
    - Differences: The difference of two sets is the first set with all items common to both sets removed. Example: the difference of {1,2,5} and {2,4,9} is {1,5}.
    - Supersets: One set is a superset of a second set if it contains all elements that are in the second set. Example: the set {1,2,5} is a superset of {1,5}.
    - Subsets: One set is a subset of a second set if all of its elements are contained in the second set. Example: the set {1,5} is a subset of {1,2,5}.

    If We're Not Doing Math, Why Do We Care?

    Now, you may be thinking: why bother with the set classes in C# if you have no need for mathematical set manipulation? The answer is simple: they are extremely efficient ways to determine ownership in a collection.

    For example, let's say you are designing an order system that tracks the price of a particular equity, and once it reaches a certain point will trigger an order. Now, since there are tens of thousands of equities on the markets, you don't want to track market data for every ticker, as that would be a waste of time and processing power for symbols you don't have orders for. Thus, we just want to subscribe to the stock symbol for an equity order only if it is a symbol we are not already subscribed to.

    Every time a new order comes in, we will check the list of subscriptions to see if the new order's stock symbol is in that list. If it is, great, we already have that market data feed! If not, then and only then should we subscribe to the feed for that symbol.

    So far so good: we have a collection of symbols, and we want to see if a symbol is present in that collection and, if not, add it. This really is the essence of set processing, but for the sake of comparison, let's say you use a list instead:

        // class that handles our order processing service
        public sealed class OrderProcessor
        {
            // contains list of all symbols we are currently subscribed to
            private readonly List<string> _subscriptions = new List<string>();

            ...
        }

    Now whenever you are adding a new order, it would look something like:

        public PlaceOrderResponse PlaceOrder(Order newOrder)
        {
            // do some validation, of course...

            // check to see if already subscribed; if not, add a subscription
            if (!_subscriptions.Contains(newOrder.Symbol))
            {
                // add the symbol to the list
                _subscriptions.Add(newOrder.Symbol);

                // do whatever magic is needed to start a subscription for the symbol
            }

            // place the order logic!
        }

    What's wrong with this? In short: performance! Finding an item inside a List<T> is a linear - O(n) - operation, which is not a very performant way to find whether an item exists in a collection. (I used to teach algorithms and data structures in my spare time at a local university, and when you began talking about big-O notation you could immediately see eyes glossing over, as if it were pure, useless theory that would not apply in the real world - but I did and still do believe it is something worth understanding well to make the best choices in computer science.)

    Let's think about this: a linear operation means that as the number of items increases, the time that it takes to perform the operation tends to increase in a linear fashion. Put crudely, this means that if you double the collection size, you might expect the operation to take something on the order of twice as long. Linear operations tend to be bad for performance because they mean that to perform some operation on a collection, you must potentially "visit" every item in the collection. Consider finding an item in a List<T>: if you want to see if the list has an item, you must potentially check every item in the list before you find it or determine it's not found.

    Now, we could of course sort our list and then perform a binary search on it, but sorting is typically linear-logarithmic complexity - O(n * log n) - and could involve temporary storage, so performing a sort after each add would probably add more time. As an alternative, we could use a SortedList<TKey, TValue>, which sorts the list on every Add(), but this has a similar level of complexity to move the items, and it also requires a key and a value - and in our case the key is the value.

    This is why sets tend to be the best choice for this type of processing: they don't rely on separate keys and values for ordering - so they save space - and they typically don't care about ordering - so they tend to be extremely performant. The .NET BCL (Base Class Library) has had the HashSet<T> since .NET 3.5, but at that time it did not implement the ISet<T> interface. As of .NET 4.0, HashSet<T> implements ISet<T>, and a new set, the SortedSet<T>, was added that gives you a set with ordering.

    HashSet<T> - For Unordered Storage of Sets

    When used right, HashSet<T> is a beautiful collection; you can think of it as a simplified Dictionary<T,T> - that is, a Dictionary where the TKey and TValue refer to the same object. This is really an oversimplification, but logically it makes sense. I've actually seen people code a Dictionary<T,T> where they store the same thing in the key and the value, and that's just inefficient because of the extra storage to hold both.

    As its name implies, the HashSet<T> uses a hashing algorithm to find the items in the set, which means it does take up some additional space, but it has lightning-fast lookups! Compare the times below between HashSet<T> and List<T>:

        Operation     HashSet<T>   List<T>
        Add()         O(1)         O(1) at end; O(n) in middle
        Remove()      O(1)         O(n)
        Contains()    O(1)         O(n)

    Now, these times are amortized and represent the typical case. In the very worst case, the operations could be linear if they involve a resizing of the collection - but this is true for both the List and the HashSet, so it's less of an issue when comparing the two.

    The key thing to note is that in the general case, HashSet is constant time for adds, removes, and contains! This means that no matter how large the collection is, it takes roughly the same amount of time to find an item or determine that it's not in the collection. Compare this to the List, where almost any add or remove must rearrange potentially all the elements! And to find an item in the list (if unsorted) you must search every item in the List.

    So as you can see, if you want to create an unordered collection and have very fast lookup and manipulation, the HashSet is a great collection. And since HashSet<T> implements ICollection<T> and IEnumerable<T>, it supports nearly all the same basic operations as the List<T> and can use the System.Linq extension methods as well.

    All we have to do to switch from a List<T> to a HashSet<T> is change our declaration. Since List and HashSet support many of the same members, chances are we won't need to change much else:

        public sealed class OrderProcessor
        {
            private readonly HashSet<string> _subscriptions = new HashSet<string>();

            // ...

            public PlaceOrderResponse PlaceOrder(Order newOrder)
            {
                // do some validation, of course...

                // check to see if already subscribed; if not, add a subscription
                if (!_subscriptions.Contains(newOrder.Symbol))
                {
                    // add the symbol to the list
                    _subscriptions.Add(newOrder.Symbol);

                    // do whatever magic is needed to start a subscription for the symbol
                }

                // place the order logic!
            }

            // ...
        }

    Notice, we didn't change any code other than the declaration for _subscriptions to be a HashSet<T>. Thus, we can pick up the performance improvements in this case with minimal code changes.

    SortedSet<T> - Ordered Storage of Sets

    Just like HashSet<T> is logically similar to Dictionary<T,T>, the SortedSet<T> is logically similar to the SortedDictionary<T,T>. The SortedSet can be used when you want to do set operations on a collection, but you want to maintain that collection in sorted order. Now, this is not necessarily mathematically relevant, but if your collection needs do include order, this is the set to use.

    The SortedSet seems to be implemented as a binary tree (possibly a red-black tree) internally. Since binary trees are dynamic structures and non-contiguous (unlike List and SortedList), this means that inserts and deletes do not involve rearranging elements or changing the linking of the nodes. There is some overhead in keeping the nodes in order, but it is much smaller than in a contiguous-storage collection like a List<T>. Let's compare the three:

        Operation     HashSet<T>   SortedSet<T>   List<T>
        Add()         O(1)         O(log n)       O(1) at end; O(n) in middle
        Remove()      O(1)         O(log n)       O(n)
        Contains()    O(1)         O(log n)       O(n)

    The MSDN documentation seems to indicate that operations on SortedSet are O(1), but this seems to be inconsistent with its implementation and appears to be a documentation error. There's actually a separate MSDN document (here) on SortedSet that indicates that it is, in fact, logarithmic in complexity. Let's put it in layman's terms: logarithmic means you can double the collection size and typically you only add a single extra "visit" to an item in the collection. Take that in contrast to List<T>'s linear operation, where if you double the size of the collection you double the "visits" to items in the collection. It's still not as performant as HashSet<T>, where it always just visits one item (amortized), but for the addition of sorting this is very good performance.

    Consider the following table. This is just illustrative data of the relative complexities, but it's enough to get the point:

        Collection Size   O(1) Visits   O(log n) Visits   O(n) Visits
        1                 1             1                 1
        10                1             4                 10
        100               1             7                 100
        1000              1             10                1000

    Notice that the logarithmic - O(log n) - visit count goes up very slowly compared to the linear - O(n) - visit count. This is because, since the list is sorted, it can do one check in the middle of the list, determine which half of the collection the data is in, and discard the other half (binary search). So, if you need your set to be sorted, you can use the SortedSet<T> just like the HashSet<T> and gain sorting for a small performance hit, but it's still faster than a List<T>.

    Unique Set Operations

    Now, if you do want to perform more set-like operations, both implementations of ISet<T> support the following, which play back towards the mathematical set operations described before:

    - IntersectWith() - Performs the set intersection of two sets. Modifies the current set so that it only contains elements also in the second set.
    - UnionWith() - Performs a set union of two sets. Modifies the current set so it contains all elements present in either the current set or the second set.
    - ExceptWith() - Performs a set difference of two sets. Modifies the current set so that it removes all elements present in the second set.
    - IsSupersetOf() - Checks if the current set is a superset of the second set.
    - IsSubsetOf() - Checks if the current set is a subset of the second set.

    For more information on the set operations themselves, see the MSDN description of ISet<T> (here).

    What Sets Don't Do

    Don't get me wrong, sets are not silver bullets. You don't really want to use a set when you want separate key-to-value lookups; that's what the IDictionary implementations are best for.

    Also, sets don't store temporal add-order. That is, if you are adding items to the end of a list all the time, your list is ordered in terms of when items were added to it. This is something the sets don't do naturally (though you could use a SortedSet with an IComparer keyed on a DateTime, but that's overkill), but List<T> can.

    Also, List<T> allows indexing, which is a blazingly fast way to iterate through items in the collection. Iterating over all the items in a List<T> is generally much, much faster than iterating over a set.

    Summary

    Sets are an excellent tool for maintaining a lookup table where the item is both the key and the value. In addition, if you have need for the mathematical set operations, the C# sets support those as well. The HashSet<T> is the set of choice if you want the fastest possible lookups but don't care about order. In contrast, the SortedSet<T> will give you a sorted collection at a slight reduction in performance.
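
    As a coda to the Unique Set Operations section above, here is a quick illustration (my sketch, not from the original article) of those ISet<T> members in action; note that they mutate the set they are called on, hence the copies:

        using System;
        using System.Collections.Generic;

        class SetOperationsDemo
        {
            static void Main()
            {
                var first = new HashSet<int> { 1, 2, 5 };
                var second = new HashSet<int> { 2, 4, 9 };

                var intersection = new HashSet<int>(first);
                intersection.IntersectWith(second);      // { 2 }

                var union = new HashSet<int>(first);
                union.UnionWith(second);                 // { 1, 2, 4, 5, 9 }

                var difference = new HashSet<int>(first);
                difference.ExceptWith(second);           // { 1, 5 }

                Console.WriteLine(first.IsSupersetOf(new[] { 1, 5 }));  // True
                Console.WriteLine(string.Join(", ", union));            // order not guaranteed
            }
        }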

    Read the article

  • Java 6 Certified with Forms and Reports 10g for EBS 12

    - by John Abraham
    Java 6 is now certified with Oracle Application Server 10g Forms and Reports with Oracle E-Business Suite Release 12 (12.0.6, 12.1.1 and higher).

    What? Wasn't this already certified?

    No, but a little background might be useful in understanding why this is a new announcement. We previously certified the use of Java 6 with E-Business Suite Release 12 -- with the sole exception of Oracle Application Server 10g components in the E-Business Suite technology stack. Oracle Application Server 10g originally included Java 1.4.2 as part of its distribution. E-Business Suite 12 uses, amongst other things, the Oracle Forms and Reports 10g components running on Java 1.4. Java 1.4 in the Oracle Application Server 10g ORACLE_HOME is used exclusively by AS 10g Forms and Reports for Java functionality. This version of Java is separate from the Java distribution used by other parts of EBS, such as Oracle Containers for Java (OC4J).

    What's new about this certification?

    You can now upgrade the older Java 1.4 libraries used by Oracle Forms & Reports 10g to Java 6. This allows you to upgrade the Java releases within the Oracle Application Server 10g ORACLE_HOME to the same level as the rest of your E-Business Suite technology stack components.

    Why upgrade?

    This becomes particularly important for customers as individual vendors' support lifecycles for Java 1.4 reach End of Life:

    - Oracle's Sun JDK Release 1.4.2's End of Extended Support: February 2013 (Sustaining Support indefinitely after)
    - IBM SDK and JRE 1.4.2's End of Service: September 2013
    - HP-UX Java 1.4.2's End-of-Life: May 2012

    Along with Oracle Forms, Java lies at the heart of the Oracle E-Business Suite. Small improvements in Java can have significant effects on the performance and stability of the E-Business Suite. As a notable side-benefit, later versions of Java have improved built-in and third-party tools for JVM performance monitoring and tuning. Our standing recommendation is that you always stay current with the latest available Java update provided by your operating system vendor.

    Don't forget to upgrade Forms & Reports to 10.1.2.3

    E-Business Suite 12 originally shipped with Oracle Application Server 10g Forms & Reports 10.1.2.0.2. That version is no longer eligible for Error Correction Support. New Forms and Reports 10g patches are now being released with Forms and Reports 10.1.2.3 as the prerequisite. Forms and Reports 10.1.2.3 was certified for EBS 12 environments in November 2008. If you haven't upgraded your EBS 12 environment to Forms & Reports 10.1.2.3, this is a good opportunity to do so.

    References

    - Using Latest Update of Java 6.0 with Oracle E-Business Suite Release 12 (My Oracle Support Document 455492.1)
    - Overview of Using Java with Oracle E-Business Suite Release 12 (My Oracle Support Document 418664.1)
    - Oracle Lifetime Support Policy (Oracle Fusion Middleware)
    - IBM Developer Kit Lifecycle Dates
    - HP-UX Java - End of Life Policy & Release Naming Terminology

    Related Articles

    - OracleAS 10g Forms and Reports 10.1.2.3 Certified With EBS R12
    - Java 6 Certified with E-Business Suite Release 12

    Read the article

  • In what (small) ways can I modify Octave's compile options to enhance it without breaking it?

    - by irrational John
If the title to this question seems a bit vague, I am sorry, but I wasn't sure how to distill what I am attempting to do into a single sentence. A few weeks back I learned that I could build and install recent releases of Octave on an Ubuntu 12.04 system by following the steps below.

Install the tools needed to compile, link, and run Octave. For Ubuntu the commands below have worked for me:

    sudo apt-get build-dep octave3.2
    sudo apt-get install build-essential gnuplot gtk2-engines-pixbuf
    sudo apt-get install libfontconfig-dev bison

Next, download the source code for an Octave release from the GNU Project Archives for Octave and unpack the archive into a folder on your system.

Use the commands below to build, check, and install Octave:

    ./configure
    make
    make check
    sudo make install

Unfortunately it turns out that the above builds an Octave that contains all the debugging symbol tables. The object files alone are huge, taking up around 1.7 GB. The current Octave documentation suggests: to compile without debugging symbols, try the command

    make CFLAGS=-O CXXFLAGS=-O LDFLAGS=

instead of just make. However, when I tried this it did not work. The -g option was still used for the compiles. For the heck of it I instead tried

    ./configure CFLAGS=-O CXXFLAGS=-O

and this did work. (Instead of ~1.7 GB, the result of the build now takes up around 253 MB.)

My questions are:
Is this actually the correct (recommended?) method to use to compile Octave without debugging symbols (i.e. without -g)?
How would I compile Octave so it uses x86_64 rather than x86? Note: I am not asking how to compile Octave to use the (experimental) 64-bit integers for array dimensions. I just want to allow the compiler to use the extra registers and word sizes available when an app runs in 64-bit mode.
Is a (more) complete list available of the directives used with the Octave Makefile? I have only seen make, make check, and make install documented. But apparently make distclean is also allowed. (It removes the compilation results so you can do a complete rebuild of everything.) I'm wondering what else might be available.

FWIW, I have tried using

    ./configure CFLAGS="-O3 -mtune=core2 -m64" CXXFLAGS="-O3 -mtune=core2 -m64"

and, surprisingly, it not only appeared to build, but also ran and passed the make check tests. The ./configure script even gave me the (deceptively?) reassuring message "Octave is now configured for x86_64-unknown-linux-gnu". But of course that's not the same thing as saying it actually "works". Is there a recommended way to enable Octave to run as an x86_64 app?

I have also tried looking inside the Octave Makefile to see if I could decipher what command-line directives it accepts. I got nowhere. I have not a single clue as to how that Makefile does whatever it is that it does.

    Read the article

  • Detect Unicode Usage in SQL Column

One optimization you can make to a SQL table that is overly large is to change from nvarchar (or nchar) to varchar (or char).  Doing so will cut the size used by the data in half, from 2 bytes per character (+ 2 bytes of overhead for varchar) to only 1 byte per character.  However, you will lose the ability to store Unicode characters, such as those used by many non-English alphabets.  If the tables are storing user input, and your application is or might one day be used internationally, it's likely that using Unicode for your characters is a good thing.  However, if instead the data is being generated by your application itself or your development team (such as lookup data), and you can be certain that Unicode character sets are not required, then switching such columns to varchar/char can be an easy improvement to make.

Avoid Premature Optimization

If you are working with a lookup table that has a small number of rows, and is only ever referenced in the application by its numeric ID column, then you won't see any benefit to using varchar vs. nvarchar.  More generally, for small tables, you won't see any significant benefit.  Thus, if you have a general policy in place to use nvarchar/nchar because it offers more flexibility, do not take this post as a recommendation to go against this policy anywhere you can.  You really only want to act on measurable evidence that suggests that using Unicode is resulting in a problem, and that you won't lose anything by switching to varchar/char.

Obviously the main reason to make this change is to reduce the amount of space required by each row.  This in turn affects how many rows SQL Server can page through at a time, and can also impact index size and how much disk I/O is required to respond to queries, etc.  If, for example, you have a table with 100 million records in it and this table has a column of type nchar(5), this column will use 5 * 2 = 10 bytes per row, and with 100M rows that works out to 10 bytes * 100 million = 1000 MB, or about 1 GB.  If it turns out that this column only ever stores ASCII characters, then changing it to char(5) would reduce this to 5 * 1 = 5 bytes per row, and only 500 MB.  Of course, if it turns out that it only ever stores the values true and false, then you could go further and replace it with a bit data type, which uses only 1 byte per row (100 MB total).

Detecting Whether Unicode Is In Use

So by now you think that you have a problem and that it might be alleviated by switching some columns from nvarchar/nchar to varchar/char, but you're not sure whether you're currently using Unicode in these columns.  By definition, you should only be thinking about this for a column that has a lot of rows in it, since the benefits just aren't there for a small table, so you can't just eyeball it and look for any non-ASCII characters.  Instead, you need a query.  It's actually very simple:

    SELECT DISTINCT(CategoryName)
    FROM Categories
    WHERE CategoryName <> CONVERT(varchar, CategoryName)

Summary

Thanks to Gregg Stark for the tip.
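As a cross-check outside the database, here is a minimal Java sketch (a hypothetical helper, not from the original article) that applies the same idea to application-side strings, flagging any value containing characters outside the ASCII range:

    public final class UnicodeCheck {
        // Returns true if the string contains any character above the ASCII range,
        // i.e. data that could be altered or lost by narrowing nvarchar to varchar.
        public static boolean containsNonAscii(String value) {
            if (value == null) {
                return false;
            }
            for (int i = 0; i < value.length(); i++) {
                if (value.charAt(i) > 127) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            System.out.println(containsNonAscii("Widgets"));   // false
            System.out.println(containsNonAscii("Wídgets"));   // true
        }
    }

Note this check is deliberately conservative: varchar columns can still hold some characters above 127 depending on the column's code page, so a hit here means "inspect before converting", not necessarily "conversion will fail".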

    Read the article

  • How do I deploy building blocks (quick parts) for Microsoft Outlook 2007?

    - by now
I want to deploy some building blocks for Microsoft Outlook 2007. Microsoft has put up a poor solution at http://office.microsoft.com/en-us/outlook/HA102086531033.aspx#4 that asks you to save a template. That solution would require you to distribute that template to all the clients. An optimal solution would allow you to put the template containing the building blocks somewhere on the network and simply use the "Workgroup building blocks path" group policy setting for shared paths in Microsoft Office 2007. Sadly, Outlook doesn't respect that policy. Also, the described solution mentioned in the article above doesn't work: step 4 asks you to save the template as a Word Template after first asking you to save it as an Outlook Template. It seems that they copied and pasted the steps from the Word article and forgot to check whether they worked (and adjust the steps accordingly). Anyway, does anyone have any suggestions for how to distribute the building blocks without distributing NormalEmail.dotm (which would overwrite the clients' own building blocks each time it is updated)? Thanks!

    Read the article

  • Apk Expansion Files - Application Licensing - Developer account - NOT_LICENSED response

    - by mUncescu
I am trying to use the APK Expansion Files extension for Android. I have uploaded the APK to the server along with the expansion files. If the application was previously published, I get a response from the server saying NOT_LICENSED. The code I use is:

    APKExpansionPolicy aep = new APKExpansionPolicy(mContext,
            new AESObfuscator(getSALT(), mContext.getPackageName(), deviceId));
    aep.resetPolicy();
    LicenseChecker checker = new LicenseChecker(mContext, aep, getPublicKey());
    checker.checkAccess(new LicenseCheckerCallback() {
        @Override
        public void allow(int reason) {
        }

        @Override
        public void dontAllow(int reason) {
            try {
                switch (reason) {
                    case Policy.NOT_LICENSED:
                        mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_UNLICENSED);
                        break;
                    case Policy.RETRY:
                        mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL);
                        break;
                }
            } finally {
                setServiceRunning(false);
            }
        }

        @Override
        public void applicationError(int errorCode) {
            try {
                mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL);
            } finally {
                setServiceRunning(false);
            }
        }
    });

So if the application wasn't previously published, the allow method is called. If the application was previously published and now it isn't, the dontAllow method is called.

I have tried:
http://developer.android.com/guide/google/play/licensing/setting-up.html#test-response
It says that if you use a developer or test account on your test device you can set a specific response. I use LICENSED as the response and still get NOT_LICENSED.
I also tried resetting the phone, clearing the Google Play Store cache and application data, and changing the versionCode in different combinations; none of it worked.

Edit: In case someone else is facing this problem, I received a mail from the Google support team:

We are aware that newly created accounts for testing in-app billing and Google licensing server (LVL) return errors, and are working on resolving this problem. Please stay tuned. In the meantime, you can use any accounts created prior to August 1st, 2012, for testing.

So it seems to be a problem with their server; if I use the main developer account everything works fine.

    Read the article

  • Security error accessing Service outside of FlexBuilder

    - by MikeHoss
I'm very new to Flex and I have what I think is a head-scratcher. I am building a little Flash app that will consume some web services over HTTP. When I am in FlexBuilder and run my app there, it works fine. When I go to my FlexBuilder project in the OS and double-click on it, it works fine. When I zip up my bin-debug folder, I get this error:

    Security error accessing url
    faultCode: Channel.Security.Error
    faultString: 'Security error accessing url'
    faultDetail: 'Destination: DefaultHTTP'

So I googled that and got information about the crossdomain.xml file. Well, I can't put a crossdomain file in the service I am calling, but I can put one somewhere else. So I put the following lines in my Flex app:

    Security.allowDomain("vx1391");
    Security.loadPolicyFile("http://vx1391:8080/job/Remote%20FIT%20Runner/ws/trunk/flash-cross-domain.xml");

My cross-domain policy file is wide open:

    <cross-domain-policy>
        <allow-access-from domain="*"/>
    </cross-domain-policy>

I know that's bad in a prod environment, but right now I just need to get this working locally but outside of FlexBuilder. Anyone want to help out this Flex noob?

    Read the article

  • Login Facebook using Web-Harvest

    - by parin
I tried to log in to Facebook using Web-Harvest. I used the following XML configuration to log in:

    <?xml version="1.0" encoding="UTF-8"?>
    <config charset="ISO-8859-1">
        <file action="write" path="homepage.xml" charset="UTF-8">
            <html-to-xml>
                <http method="post" url="http://www.facebook.com/login.php" cookie-policy="browser">
                    <http-param name="email">myemail</http-param>
                    <http-param name="pass">mypassword</http-param>
                </http>
            </html-to-xml>
        </file>
    </config>

The homepage.xml (output) file contains the XML code for the login page of Facebook, along with the following lines:

    <h2 class="main_message" id="standard_error">Cookies Required</h2>
    <p class="sub_message" id="standard_explanation">Cookies are not enabled on your browser. Please adjust this in your security preferences before continuing.</p>

I tried all the allowed values for cookie-policy in the http processor of the XML configuration but was unsuccessful.

    Read the article

  • Java: What are the various available security settings for applets

    - by bguiz
I have an applet that throws this exception when trying to communicate with the server (running on localhost). This problem is limited to applets only: a POJO client is able to communicate with the exact same server without any problem.

    Exception in thread "AWT-EventQueue-1" java.security.AccessControlException: access denied (java.net.SocketPermission 127.0.0.1:9999 connect,resolve)
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)

My applet.policy file's contents is:

    grant {
        permission java.security.AllPermission;
    };

My question is: what are the other places where I need to modify my security settings to grant an applet more permissions? Thank you.

EDIT: Further investigation has led me to find that this problem only occurs on some machines, but not others. So it could be a machine-level (global) setting that is causing this, rather than an application-specific setting such as the one in the applet.policy file.

EDIT: Another SO question, Socket connection to originating server of an unsigned Java applet, seems to describe the exact same problem, and Tom Hawtin - tackline's answer provides the reason why (a security patch released that disallows applets from connecting to localhost). Bearing this in mind, how do I grant the applet the security settings such that it can indeed run on my machine? Also, why does it run as-is on other machines but not mine?
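One thing worth checking is whether the policy file is being consulted at all: the JRE reads the user policy file (~/.java.policy by default) or a file passed via -Djava.security.policy. A minimal sketch of a narrower grant matching the exact permission in the stack trace, rather than AllPermission (the codeBase URL here is a placeholder, not taken from the question):

    grant codeBase "http://localhost/-" {
        // Allow the applet to connect back to the server on port 9999 only
        permission java.net.SocketPermission "127.0.0.1:9999", "connect,resolve";
    };

Granting only the permission named in the exception makes it easier to tell whether the policy file is being picked up: if this grant has no effect either, the problem is the file location or the plugin's configuration, not the grant itself.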

    Read the article

  • Activity has leaked window that was originally added

    - by Pentium10
What is this error, and why does it happen?

    05-17 18:24:57.069: ERROR/WindowManager(18850): Activity com.mypkg.myP has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView@44c46ff0 that was originally added here
    05-17 18:24:57.069: ERROR/WindowManager(18850): android.view.WindowLeaked: Activity com.mypkg.myP has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView@44c46ff0 that was originally added here
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.view.ViewRoot.<init>(ViewRoot.java:231)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:148)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:91)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.view.Window$LocalWindowManager.addView(Window.java:424)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.app.Dialog.show(Dialog.java:239)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at com.mypkg.myP$PreparePairingLinkageData.onPreExecute(viewP.java:183)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.os.AsyncTask.execute(AsyncTask.java:391)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at com.mypkg.myP.onCreate(viewP.java:94)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2544)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2621)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.app.ActivityThread.access$2200(ActivityThread.java:126)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1932)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.os.Handler.dispatchMessage(Handler.java:99)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.os.Looper.loop(Looper.java:123)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at android.app.ActivityThread.main(ActivityThread.java:4595)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at java.lang.reflect.Method.invokeNative(Native Method)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at java.lang.reflect.Method.invoke(Method.java:521)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
    05-17 18:24:57.069: ERROR/WindowManager(18850): at dalvik.system.NativeStart.main(Native Method)
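The trace points at a dialog shown in an AsyncTask's onPreExecute (viewP.java:183) whose window outlives the Activity that created it. A minimal sketch of the usual fix, dismissing the dialog when the task finishes and again in onDestroy (class and field names are reconstructed from the trace and are assumptions, not the asker's actual code):

    import android.app.Activity;
    import android.app.ProgressDialog;
    import android.os.AsyncTask;
    import android.os.Bundle;

    public class ViewP extends Activity {
        private ProgressDialog mDialog;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            new PreparePairingLinkageData().execute();
        }

        @Override
        protected void onDestroy() {
            // Dismiss any dialog still attached so its window is not leaked
            // when the Activity goes away (e.g. on rotation or finish()).
            if (mDialog != null && mDialog.isShowing()) {
                mDialog.dismiss();
            }
            super.onDestroy();
        }

        private class PreparePairingLinkageData extends AsyncTask<Void, Void, Void> {
            @Override
            protected void onPreExecute() {
                // This is the window the WindowManager reports as leaked
                // if it is never dismissed before the Activity is destroyed.
                mDialog = ProgressDialog.show(ViewP.this, "", "Preparing...");
            }

            @Override
            protected Void doInBackground(Void... params) {
                // long-running work goes here
                return null;
            }

            @Override
            protected void onPostExecute(Void result) {
                if (mDialog != null && mDialog.isShowing()) {
                    mDialog.dismiss();
                }
            }
        }
    }

The leak happens because a dialog's window is tied to the Activity that created it; if the Activity is destroyed while the dialog is still showing, the WindowManager logs exactly this error.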

    Read the article

< Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >