Search Results

Search found 28747 results on 1150 pages for 'switch case'.


  • SOA Community Newsletter November 2012

    - by JuergenKress
    Dear SOA partner community member Too many different product from Oracle, no idea how do they fit together? Get a copy of the Oracle catalog, an excellent overview of the Oracle middleware portfolio. BPM is a key solution to this portfolio. To position BPM to your customers you can find many use case ideas in the paper BPM 11g Patterns and industry specific value propositions for Financial Services & Insurance & Retail. Many more Process Accelerators (11.1.1.6.2) have become available. It is an excellent demo and starting point for BPM projects. Our SOA Suite team published the most important OOW presentation at the OTN website. The Oracle SOA proactive support team is running a series of blog posts about SOA and JMS Introductory. To become an expert in SOA, Bob highlighted the latest list of SOA books. For OSB projects we recommend the EAIESB OSB poster. Thanks to all the experts who contributed and shared their SOA & BPM knowledge this month again. Please feel free to send us the link to your blog post via twitter @soacommunity: Undeploy multiple SOA composites with WLST or ANT by Danilo Schmiedel Fault Handling Slides and Q&A by Vennester Installing Oracle Event Processing 11g by Antoney Reynolds Expanding the Oracle Enterprise Repository with functional documentation by Marc Kuijpers Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile By Michelle Kimihira A brief note for customers running SOA Suite on AIX platforms By Christian ACM - Adaptive Case Management by Peter Paul BPM 11g - Dynamic Task Assignment with Multi-level Organization Units By Mark Foster Oracle Real User Experience Insight: Oracle's Approach to User Experience Hope to see you at the Middleware Day at UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress Oracle SOA & BPM Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/soanewsNovember2012 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community newsletter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Command does not execute in crontab while command itself works just fine

    - by fuzzybee
    I have this script from Colin Johnson on Github - https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-automate-backup It seems great. I have modified it to send email to myself every time an EBS snapshot is created or deleted. The following works like a charm ec2-automate-backup.sh -v "vol-myvolumeid" -k 3 However, it does not execute at all as part of my crontab (I didn't receive any emails) #some command that got commented out */5 * * * * ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3; * * * * * date /root/logs/crontab.log; */5 * * * * date /root/logs/crontab2.log Please note that the 2nd and 3rd execute just fines as I can see the date and time in log files. What could I have missed here? The full ec2-automate-backup.sh is as follows: #!/bin/bash - # Author: Colin Johnson / [email protected] # Date: 2012-09-24 # Version 0.1 # License Type: GNU GENERAL PUBLIC LICENSE, Version 3 # #confirms that executables required for succesful script execution are available prerequisite_check() { for prerequisite in basename ec2-create-snapshot ec2-create-tags ec2-describe-snapshots ec2-delete-snapshot date do #use of "hash" chosen as it is a shell builtin and will add programs to hash table, possibly speeding execution. Use of type also considered - open to suggestions. hash $prerequisite &> /dev/null if [[ $? == 1 ]] #has exits with exit status of 70, executable was not found then echo "In order to use `basename $0`, the executable \"$prerequisite\" must be installed." 1>&2 | mailx -s "Error happened 0" [email protected] ; exit 70 fi done } #get_EBS_List gets a list of available EBS instances depending upon the selection_method of EBS selection that is provided by user input get_EBS_List() { case $selection_method in volumeid) if [[ -z $volumeid ]] then echo "The selection method \"volumeid\" (which is $app_name's default selection_method of operation or requested by using the -s volumeid parameter) requires a volumeid (-v volumeid) for operation. Correct usage is as follows: \"-v vol-6d6a0527\",\"-s volumeid -v vol-6d6a0527\" or \"-v \"vol-6d6a0527 vol-636a0112\"\" if multiple volumes are to be selected." 1>&2 | mailx -s "Error happened 1" [email protected] ; exit 64 fi ebs_selection_string="$volumeid" ;; tag) if [[ -z $tag ]] then echo "The selected selection_method \"tag\" (-s tag) requires a valid tag (-t key=value) for operation. Correct usage is as follows: \"-s tag -t backup=true\" or \"-s tag -t Name=my_tag.\"" 1>&2 | mailx -s "Error happened 2" [email protected] ; exit 64 fi ebs_selection_string="--filter tag:$tag" ;; *) echo "If you specify a selection_method (-s selection_method) for selecting EBS volumes you must select either \"volumeid\" (-s volumeid) or \"tag\" (-s tag)." 1>&2 | mailx -s "Error happened 3" [email protected] ; exit 64 ;; esac #creates a list of all ebs volumes that match the selection string from above ebs_backup_list_complete=`ec2-describe-volumes --show-empty-fields --region $region $ebs_selection_string 2>&1` #takes the output of the previous command ebs_backup_list_result=`echo $?` if [[ $ebs_backup_list_result -gt 0 ]] then echo -e "An error occured when running ec2-describe-volumes. 
The error returned is below:\n$ebs_backup_list_complete" 1>&2 | mailx -s "Error happened 4" [email protected] ; exit 70 fi ebs_backup_list=`echo "$ebs_backup_list_complete" | grep ^VOLUME | cut -f 2` #code to right will output list of EBS volumes to be backed up: echo -e "Now outputting ebs_backup_list:\n$ebs_backup_list" } create_EBS_Snapshot_Tags() { #snapshot tags holds all tags that need to be applied to a given snapshot - by aggregating tags we ensure that ec2-create-tags is called only onece snapshot_tags="" #if $name_tag_create is true then append ec2ab_${ebs_selected}_$date_current to the variable $snapshot_tags if $name_tag_create then ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2` snapshot_tags="$snapshot_tags --tag Name=ec2ab_${ebs_selected}_$date_current" fi #if $purge_after_days is true, then append $purge_after_date to the variable $snapshot_tags if [[ -n $purge_after_days ]] then snapshot_tags="$snapshot_tags --tag PurgeAfter=$purge_after_date --tag PurgeAllow=true" fi #if $snapshot_tags is not zero length then set the tag on the snapshot using ec2-create-tags if [[ -n $snapshot_tags ]] then echo "Tagging Snapshot $ec2_snapshot_resource_id with the following Tags:" ec2-create-tags $ec2_snapshot_resource_id --region $region $snapshot_tags #echo "Snapshot tags successfully created" | mailx -s "Snapshot tags successfully created" [email protected] fi } date_command_get() { #finds full path to date binary date_binary_full_path=`which date` #command below is used to determine if date binary is gnu, macosx or other date_binary_file_result=`file -b $date_binary_full_path` case $date_binary_file_result in "Mach-O 64-bit executable x86_64") date_binary="macosx" ;; "ELF 64-bit LSB executable, x86-64, version 1 (SYSV)"*) date_binary="gnu" ;; *) date_binary="unknown" ;; esac #based on the installed date binary the case statement below will determine the method to use to determine "purge_after_days" in the future case $date_binary in gnu) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; macosx) date_command="date -v+${purge_after_days}d -u +%Y-%m-%d" ;; unknown) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; *) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; esac } purge_EBS_Snapshots() { #snapshot_tag_list is a string that contains all snapshots with either the key PurgeAllow or PurgeAfter set snapshot_tag_list=`ec2-describe-tags --show-empty-fields --region $region --filter resource-type=snapshot --filter key=PurgeAllow,PurgeAfter` #snapshot_purge_allowed is a list of all snapshot_ids with PurgeAllow=true snapshot_purge_allowed=`echo "$snapshot_tag_list" | grep .*PurgeAllow'\t'true | cut -f 3` for snapshot_id_evaluated in $snapshot_purge_allowed do #gets the "PurgeAfter" date which is in UTC with YYYY-MM-DD format (or %Y-%m-%d) purge_after_date=`echo "$snapshot_tag_list" | grep .*$snapshot_id_evaluated'\t'PurgeAfter.* | cut -f 5` #if purge_after_date is not set then we have a problem. Need to alter user. if [[ -z $purge_after_date ]] #Alerts user to the fact that a Snapshot was found with PurgeAllow=true but with no PurgeAfter date. then echo "A Snapshot with the Snapshot ID $snapshot_id_evaluated has the tag \"PurgeAllow=true\" but does not have a \"PurgeAfter=YYYY-MM-DD\" date. $app_name is unable to determine if $snapshot_id_evaluated should be purged." 
1>&2 | mailx -s "Error happened 5" [email protected] else #convert both the date_current and purge_after_date into epoch time to allow for comparison date_current_epoch=`date -j -f "%Y-%m-%d" "$date_current" "+%s"` purge_after_date_epoch=`date -j -f "%Y-%m-%d" "$purge_after_date" "+%s"` #perform compparison - if $purge_after_date_epoch is a lower number than $date_current_epoch than the PurgeAfter date is earlier than the current date - and the snapshot can be safely removed if [[ $purge_after_date_epoch < $date_current_epoch ]] then echo "The snapshot \"$snapshot_id_evaluated\" with the Purge After date of $purge_after_date will be deleted." ec2-delete-snapshot --region $region $snapshot_id_evaluated echo "Old snapshots successfully deleted for $volumeid" | mailx -s "Old snapshots successfully deleted for $volumeid" [email protected] fi fi done } #calls prerequisitecheck function to ensure that all executables required for script execution are available prerequisite_check app_name=`basename $0` #sets defaults selection_method="volumeid" region="ap-southeast-1" #date_binary allows a user to set the "date" binary that is installed on their system and, therefore, the options that will be given to the date binary to perform date calculations date_binary="" #sets the "Name" tag set for a snapshot to false - using "Name" requires that ec2-create-tags be called in addition to ec2-create-snapshot name_tag_create=false #sets the Purge Snapshot feature to false - this feature will eventually allow the removal of snapshots that have a "PurgeAfter" tag that is earlier than current date purge_snapshots=false #handles options processing while getopts :s:r:v:t:k:pn opt do case $opt in s) selection_method="$OPTARG";; r) region="$OPTARG";; v) volumeid="$OPTARG";; t) tag="$OPTARG";; k) purge_after_days="$OPTARG";; n) name_tag_create=true;; p) purge_snapshots=true;; *) echo "Error with Options Input. Cause of failure is most likely that an unsupported parameter was passed or a parameter was passed without a corresponding option." 1>&2 ; exit 64;; esac done #sets date variable date_current=`date -u +%Y-%m-%d` #sets the PurgeAfter tag to the number of days that a snapshot should be retained if [[ -n $purge_after_days ]] then #if the date_binary is not set, call the date_command_get function if [[ -z $date_binary ]] then date_command_get fi purge_after_date=`$date_command` echo "Snapshots taken by $app_name will be eligible for purging after the following date: $purge_after_date." fi #get_EBS_List gets a list of EBS instances for which a snapshot is desired. The list of EBS instances depends upon the selection_method that is provided by user input get_EBS_List #the loop below is called once for each volume in $ebs_backup_list - the currently selected EBS volume is passed in as "ebs_selected" for ebs_selected in $ebs_backup_list do ec2_snapshot_description="ec2ab_${ebs_selected}_$date_current" ec2_create_snapshot_result=`ec2-create-snapshot --region $region -d $ec2_snapshot_description $ebs_selected 2>&1` if [[ $? != 0 ]] then echo -e "An error occured when running ec2-create-snapshot. 
The error returned is below:\n$ec2_create_snapshot_result" 1>&2 ; exit 70 else ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2` echo "Snapshots successfully created for volume $volumeid" | mailx -s "Snapshots successfully created for $volumeid" [email protected] fi create_EBS_Snapshot_Tags done #if purge_snapshots is true, then run purge_EBS_Snapshots function if $purge_snapshots then echo "Snapshot Purging is Starting Now." purge_EBS_Snapshots fi cron log Oct 23 10:24:01 ip-10-130-153-227 CROND[28214]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:24:01 ip-10-130-153-227 CROND[28215]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:25:01 ip-10-130-153-227 CROND[28228]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:25:01 ip-10-130-153-227 CROND[28229]: (root) CMD (date >> /root/logs/crontab2.log) Oct 23 10:26:01 ip-10-130-153-227 CROND[28239]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:27:01 ip-10-130-153-227 CROND[28247]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:27:01 ip-10-130-153-227 CROND[28248]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:28:01 ip-10-130-153-227 CROND[28263]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:29:01 ip-10-130-153-227 CROND[28275]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:30:01 ip-10-130-153-227 CROND[28292]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:30:01 ip-10-130-153-227 CROND[28293]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:30:01 ip-10-130-153-227 CROND[28294]: (root) CMD (date >> /root/logs/crontab2.log) Oct 23 10:31:01 ip-10-130-153-227 CROND[28312]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:32:01 ip-10-130-153-227 CROND[28319]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:33:01 ip-10-130-153-227 CROND[28325]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:33:01 ip-10-130-153-227 CROND[28324]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:34:01 ip-10-130-153-227 CROND[28345]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:35:01 ip-10-130-153-227 CROND[28362]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:35:01 ip-10-130-153-227 CROND[28363]: (root) CMD (date >> /root/logs/crontab2.log) Mails to root From [email protected] Tue Oct 23 06:00:01 2012 Return-Path: <[email protected]> Date: Tue, 23 Oct 2012 06:00:01 GMT From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <root@ip-10-130-153-227> root ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 Content-Type: text/plain; charset=UTF-8 Auto-Submitted: auto-generated X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <HOME=/root> X-Cron-Env: <PATH=/usr/bin:/bin> X-Cron-Env: <LOGNAME=root> X-Cron-Env: <USER=root> Status: R /bin/sh: root: command not found
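    Side note on the symptoms shown above: the cron log confirms the date entries really do use ">>" (the redirections were apparently lost when the question text was pasted), and both the log line "CMD (root (ec2-automate-backup.sh ...))" and the bounce "/bin/sh: root: command not found" suggest the backup entry was written in the /etc/crontab format (with a user field) in a place where cron treats "root" as the command to run. Cron's minimal PATH (the X-Cron-Env header shows /usr/bin:/bin) also will not find the script or the EC2 API tools unless full paths or a PATH line are given. A hedged sketch of a root user crontab (crontab -e) that avoids both problems, with the paths and the mail address as placeholders:

        # root's user crontab (crontab -e): no user column here, unlike /etc/crontab.
        # Placeholder PATH -- include wherever the script and the EC2 API tools live;
        # the tools may also need JAVA_HOME/EC2_HOME/credential variables that an
        # interactive shell exports but cron does not.
        PATH=/usr/local/bin:/usr/bin:/bin
        MAILTO=admin@example.com
        */5 * * * * /usr/local/bin/ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 >> /root/logs/backup.log 2>&1
        * * * * * date >> /root/logs/crontab.log
        */5 * * * * date >> /root/logs/crontab2.log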

    Read the article

  • Creating a layer of abstraction over the ORM layer

    - by Daok
    I believe that if your repositories use an ORM, the database is already sufficiently abstracted. However, where I work now, some believe we should add a layer that abstracts the ORM itself, in case we want to change ORMs later. Is that really necessary, or is it simply a lot of overhead to build a layer that can work with many ORMs? Edit - to give more detail: we have POCO classes and entity classes that are mapped with AutoMapper. The entity classes are used by the repository layer. The repository layer then uses the additional abstraction layer to communicate with Entity Framework. The business layer has no direct access to Entity Framework at all. Even without the additional abstraction over the ORM, the business layer has to go through the service layer, which uses the repository layer. In both cases the business layer is completely separated from the ORM. The main argument is being able to change the ORM in the future. Since the ORM is already localized inside the repository layer, it seems well separated to me, and I do not see why an additional layer of abstraction is required for the code to be considered "quality".
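    To illustrate the boundary being debated (sketched in Java/JPA purely for brevity; the poster's stack is Entity Framework with AutoMapper-mapped POCOs, and the class names here are hypothetical), the repository interface is already the seam that keeps ORM types away from the business layer:

        // Domain/entity object (the POCO/entity split is collapsed here for brevity)
        @javax.persistence.Entity
        public class Customer {
            @javax.persistence.Id
            public long id;
            public String name;
        }

        // The repository interface is the boundary the business layer sees; no ORM types leak through it.
        public interface CustomerRepository {
            Customer findById(long id);
            void save(Customer customer);
        }

        // ORM-specific implementation (JPA here only for illustration). Swapping ORMs means
        // rewriting classes like this one, which is the sense in which the repository
        // already abstracts the ORM.
        public class JpaCustomerRepository implements CustomerRepository {
            private final javax.persistence.EntityManager em;

            public JpaCustomerRepository(javax.persistence.EntityManager em) {
                this.em = em;
            }

            @Override
            public Customer findById(long id) {
                return em.find(Customer.class, id);
            }

            @Override
            public void save(Customer customer) {
                em.persist(customer);
            }
        }

    The extra layer under discussion would sit between JpaCustomerRepository and the ORM itself; as the question notes, the ORM dependency is already localized inside the repository implementation, so the added layer mainly buys insurance against an ORM swap.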

    Read the article

  • Drbd Primary/Primary + iSCSI: accessing to different files avoids split brain?

    - by Eddie C.
    I have a question / curiosity about split brain in a DRBD Primary/Primary configuration. Suppose two nodes (hosts), host1 and host2, are configured as DRBD Primary/Primary with two different shares (NFS, CIFS or iSCSI) exported from a replicated area (say /drbd): /drbd/file1.data and /drbd/file2.data. If one pool of clients accessed only the host1 share, reading and writing only file1.data, and another pool accessed only the host2 share for file2.data, would this scenario avoid a split-brain situation in case of a node failure, or is that just a conjecture? The final purpose is to load-balance between the two nodes under normal conditions and collapse onto a single node in case of failure. Thank you! Eddie
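    For reference, split-brain handling in a dual-primary DRBD setup is governed by the resource configuration rather than by which files each node happens to touch. A minimal, hypothetical DRBD 8.x resource sketch for the scenario above might look like this, with host names, devices and addresses as placeholders:

        resource r0 {
            # dual primary also requires a cluster file system (e.g. OCFS2/GFS2) on top of /dev/drbd0
            startup { become-primary-on both; }
            net {
                allow-two-primaries;
                # recovery policies consulted once a split brain has been detected
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
            }
            # placeholder devices and addresses
            on host1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7789; meta-disk internal; }
            on host2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7789; meta-disk internal; }
        }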

    Read the article

  • Jupiter Changes Display when using volume key bindings

    - by user87797
    I recently installed Jupiter on Ubuntu 12.04 (ASUS U36JC) and noticed that my display switches from clone to external and back to clone every time I use the volume-control keys. Hit "Fn + volume down" and the screen goes blank; hit "Fn + volume up" and the display comes back. This only happens when Jupiter is running, so I assume there is a key-binding conflict, but I don't know how to fix it. No options other than the regular ones show up in the keyboard settings under System Tools.

    Read the article

  • How to configure cookieless virtual host in Apache2?

    - by xzyfer
    We run over a hundred web applications (growing daily) on a LAMP stack using Apache2 on Ubuntu 10.04, and we would like all requests for static content to be cookieless. We host applications on many different domains, the majority of them SaaS applications. Many of the domains host instances of the applications on subdomains, i.e. myapp.example.com, myapp2.example.com, myapp.otherexample.com, etc. At the moment all static content is served relative to the (sub)domain requesting it. As far as I understand the process, I would need to set up a new domain, e.g. staticexample.com. In that case, is special configuration required in the virtual host for this domain to ensure no cookies are served? Also, would it be possible to use static.example.com instead? If so, what configuration would I need in the virtual host for this subdomain to ensure no cookies are served?
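    As a rough sketch of what such a virtual host could look like (hypothetical domain and paths; assumes mod_headers and mod_expires are enabled), the key point is simply that the static host never sets cookies and strips any that arrive:

        <VirtualHost *:80>
            ServerName static.example.com
            DocumentRoot /var/www/static

            # mod_headers: never send cookies back, and drop any the browser sends
            Header unset Set-Cookie
            RequestHeader unset Cookie

            # mod_expires: long cache lifetimes are the usual companion of a cookieless host
            ExpiresActive On
            ExpiresDefault "access plus 30 days"

            <Directory /var/www/static>
                Options -Indexes
                AllowOverride None
            </Directory>
        </VirtualHost>

    One caveat: if the applications set cookies on .example.com (the bare domain), browsers will still send them to static.example.com, which is why a separately registered domain such as staticexample.com is often preferred for truly cookieless static content.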

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects, are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make, utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system. 
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 ld option to warn when link accesses a library via default path PSARC/2011/068 ld -z assert-deflib option into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library. ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. If is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain from of -z deflib, and make symlinks for the non-OSnet dependencies. The exception to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system which integrated into snv_162 (March 2011). 
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.
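    A hedged sketch of how one might turn the check on for an ordinary link line (illustrative flags and paths only; the ON build wires this into its makefiles and stub proto symlinks as described above):

        # allow libc from the default search paths, flag everything else found there,
        # and promote the warnings to hard build errors
        LDFLAGS="-z fatal-warnings -z assert-deflib=libc.so"
        cc -o mycmd mycmd.o -L$STUBPROTO/usr/lib -lfoo $LDFLAGS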

    Read the article

  • Using orientation to calculate position on Windows Phone 7

    - by Lavinski
    I'm using the motion API and I'm trying to figure out a control scheme for the game I'm currently developing. What I'm trying to achieve is for the orientation of the device to correlate directly to a position, such that tilting the phone forward and to the left represents the top-left position, and back and to the right the bottom-right position. Photos to make it clearer (the red dot would be the calculated position): "Forward and Left", "Back and Right". Now for the tricky bit: I also have to make sure the values take into account the left-landscape and right-landscape device orientations (portrait is the default, so no calculations would be needed for it). Has anyone done anything like this? Notes: I've tried using the yaw, pitch, roll and Quaternion readings. Sample:

        // Get device facing vector
        public static Vector3 GetState()
        {
            lock (lockable)
            {
                var down = Vector3.Forward;
                var direction = Vector3.Transform(down, state);
                switch (Orientation)
                {
                    case Orientation.LandscapeLeft:
                        return Vector3.TransformNormal(direction, Matrix.CreateRotationZ(-rightAngle));
                    case Orientation.LandscapeRight:
                        return Vector3.TransformNormal(direction, Matrix.CreateRotationZ(rightAngle));
                }
                return direction;
            }
        }

    Read the article

  • SAP or Navision? Career Path

    - by codebased
    This could be tricky to ask; I wasn't sure whether to ask this question here, but I thought I'd give it a try. I've been in the software industry since 2002 and I'm now at a senior level where I code, lead and define architecture; giving technical solutions to management is one of the assets I've earned during my service. Now it is time to define a road map for the future, $$$. I am not in favor of project-management roles. I've been thinking of moving into ERP, and my current company gives me the option to go with Navision / Microsoft Dynamics. They are currently on 4.0, but they are planning to move to the 2009 release and also to build a plug-in of their own. The option looks good, because Microsoft is pushing hard to win the market for the Dynamics products; however, they have had less success in Australia. The other option is SAP, where a person can earn around 200 K $ a year, whereas I doubt the same kind of financial growth is available for a Microsoft specialist. What is your opinion on Navision versus SAP? If I try to move completely to SAP it could be a bit challenging, as the market will consider me a fresher; however, the return is quite good. In the Microsoft world, I think the technology changes so fast that there is less chance to grow within the same experience; in other words, if a new framework comes out in .NET, the market looks for the person who knows that new framework, not just .NET. In the SAP world the base remains the same, and the chances of earning more from the market are better. What would you do if you were me? On Stack Overflow there are 20+ Navision questions versus 200+ for SAP... :-)

    Read the article

  • Open World Session - BPM, SOA and ADF Combined:Patterns learned from Fusion Applications

    - by mesriniv
    Blog by Meera Srinivasan (Oracle Product Management). This afternoon (10/2/2012), Mohan Kamath and I (Meera Srinivasan) delivered an Open World session on how Oracle Fusion Applications (the next-generation business applications from Oracle) use the Oracle BPM, Oracle SOA and Oracle ADF products. These adoption patterns can be applied in a generic manner to produce process-centric, user-centric, highly customizable and extensible next-generation applications. The session was well attended and we had lively discussions with the attendees during Q&A. We started with why, as an application developer, you should look at BPM for creating a process-centric application, and presented the following Fusion adoption patterns: model-driven agile development; customization and extension; guided process interactions; personalization and customization of end-user interfaces; and approval flows. The Fusion HCM On-Boarding Process (Activity Guide interface) was used as an example for the Guided Process Interactions adoption pattern, and the Fusion CRM BPM process templates for the Customization adoption pattern. In the Personalization and Customization of End User Interfaces section, we looked at how ADF is used within Oracle BPM and the various options available to customize end-user interfaces. We also presented how Oracle Procurement does complex approvals using Rules and Approval Management Extensions. We hope you found the session useful, and please do try to attend Heidi's session on dynamic case management: Case Management Patterns with Oracle Unified Business Process Management Suite. Marriott Marquis - Salon 7, Thu 11:15 AM - 12:15 PM.

    Read the article

  • Not able to change flash settings

    - by Zeyu
    My GNOME session crashes constantly when I open web pages containing Flash, so I tried to turn off hardware acceleration. I opened the Adobe Flash Player settings panel but found that the buttons are not clickable. I can still switch between the display/privacy/storage panels using the TAB and ENTER keys, but for other controls, like the "Enable hardware acceleration" checkbox, this doesn't work. Does anyone know how to solve this? Thanks.

    Read the article

  • 'sudo su -' vs 'sudo -i' vs 'sudo /bin/bash' - when does it matter which is used, or does it matter at all?

    - by Paul
    When I'm doing something that requires root be typed in dozens of times in a row, I prefer to switch my session to a root session. In the various tutorials and instructions I have used on the Internet, I see sudo su, sudo su -, sudo -i and sudo /bin/bash being used to open a root session, but I'm not clear on the difference between these and when or if that difference matters. Can someone clear this up for me?

    Read the article

  • E.T. Phone "Home" - Hey I've discovered a leak..!

    - by Martin Deh
    Being a member of the WebCenter ATEAM, we are often asked to performance tune a WebCenter custom portal application or a WebCenter Spaces deployment.  Most of the time, the process is pretty much the same.  For example, we often use tools like httpWatch and FireBug to monitor the application, and then perform load tests using JMeter or Selenium.  In addition, there are the fine tuning of the different performance based tuning parameters that are outlined in the documentation and by blogs that have been written by my fellow ATEAMers (click on the "performance" tag in this ATEAM blog).  While performing the load test where the outcome produces a significant reduction in the systems resources (memory), one of the causes that plays a role in memory "leakage" is due to the implementation of the navigation menu UI.  OOTB in both JDeveloper and WebCenter Spaces, there are sample (page) templates that include a "default" navigation menu.  In WebCenter Spaces, this is through the SpacesNavigationModel taskflow region, and in a custom portal (i.e. pageTemplate_globe.jspx) the menu UI is contructed using standard ADF components.  These sample menu UI's basically enable the underlying navigation model to visualize itself to some extent.  However, due to certain limitations of these sample menu implementations (i.e. deeper sub-level of navigations items, look-n-feel, .etc), many customers have developed their own custom navigation menus using a combination of HTML, CSS and JQuery.  While this is supported somewhat by the framework, it is important to know what are some of the best practices in ensuring that the navigation menu does not leak.  In addition, in this blog I will point out a leak (BUG) that is in the sample templates.  OK, E.T. the suspence is killing me, what is this leak? Note: for those who don't know, info on E.T. can be found here In both of the included templates, the example given for handling the navigation back to the "Home" page, will essentially provide a nice little memory leak every time the link is clicked. Let's take a look a simple example, which uses the default template in Spaces. The outlined section below is the "link", which is used to enable a user to navigation back quickly to the Group Space Home page. When you (mouse) hover over the link, the browser displays the target URL. From looking initially at the proposed URL, this is the intended destination.  Note: "home" in this case is the navigation model reference (id), that enables the display of the "pretty URL". Next, notice the current URL, which is displayed in the browser.  Remember, that PortalSiteHome = home.  The other highlighted item adf.ctrl-state, is very important to the framework.  This item is basically a persistent query parameter, which is used by the (ADF) framework to managing the current session and page instance.  Without this parameter present, among other things, the browser back-button navigation will fail.  In this example, the value for this parameter is currently 95K25i7dd_4.  Next, through the navigation menu item, I will click on the Page2 link. Inspecting the URL again, I can see that it reports that indeed the navigation is successful and the adf.ctrl-state is also in the URL.  For those that are wondering why the URL displays Page3.jspx, instead of Page2.jspx. Basically the (file) naming convention for pages created ar runtime in Spaces start at Page1, and then increment as you create additional pages.  The name of the actual link (i.e. Page2) is the page "title" attribute.  
So the moral of the story is, unlike design time created pages, run time created pages the name of the file will 99% never match the name that appears in the link. Next, is to click on the quick link for navigating back to the Home page. Quick investigation yields that the navigation was indeed successful.  In the browser's URL there is a home (pretty URL) reference, and there is also a reference to the adf.ctrl-state parameter.  So what's the issue?  Can you remember what the value was for the adf.ctrl-state?  The current value is 3D95k25i7dd_149.  However, the previous value was 95k25i7dd_4.  Here is what happened.  Remember when (mouse) hovering over the link produced the following target URL: http://localhost:8888/webcenter/spaces/NavigationTest/home This is great for the browser as this URL will navigate to the intended targer.  However, what is missing is the adf.ctrl-state parameter.  Since this parameter was not present upon navigation "within" the framework, the ADF framework produced another adf.ctrl-state (object).  The previous adf.ctrl-state basically is orphaned while continuing to be alive in memory.  Note: the auto-creation of the adf.ctrl state does happen initially when you invoke the Spaces application  for the first time.  The following is the line of code which produced the issue: <af:goLink destination="#{boilerBean.globalLogoURIInSpace} ... Here the boilerBean is responsible for returning the "string" url, which in this case is /spaces/NavigationTest/home. Unfortunately, again what is missing is adf.ctrl-state. Note: there are more than one instance of the goLinks in the sample templates. So E.T. how can I correct this? There are 2 simple fixes.  For the goLink's destination, use the navigation model to return the actually "node" value, then use the goLinkPrettyUrl method to add the current adf.ctrl-state: <af:goLink destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}"} ... />  Note: the node value is the [navigation model id]  Using a goLink does solve the main issue.  However, since the link basically does a redirect, some browsers like IE will produce a somewhat significant "flash".  In a Spaces application, this may be an annoyance to the users.  Another way to solve the leakage problem, and also remove the flash between navigations is to use a af:commandLink.  For example, here is the code example for this scenario: <af:commandLink id="pt_cl2asf" actionListener="#{navigationContext.processAction}" action="pprnav">    <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/> </af:commandLink> Here, the navigation node to where home is located is delivered by way of the attribute to the commandLink.  The actual navigation is performed by the processAction, which is needing the "node" value. E.T. OK, you solved the OOTB sample BUG, what about my custom navigation code? I have seen many implementations of creating a navigation menu through custom code.  In addition, there are some blog sites that also give detailed examples.  The majority of these implementations are very similar.  The code usually involves using standard HTML tags (i.e. DIVS, UL, LI, .,etc) and either CSS or JavaScript (JQuery) to produce the flyout/drop-down effect.  The navigation links in these cases are standard <a href... > tags.  Although, this type of approach is not fully accepted by the ADF community, it does work.  
The important thing to note here is that the <a> tag value must use the goLinkPrettyURL method of contructing the target URL.  For example: <a href="${contextRoot}${menu.goLinkPrettyUrl}"> The main reason why this type of approach is popular is that links that are created this way (also with using af:goLinks), the pages become crawlable by search engines.  CommandLinks are currently not search friendly.  However, in the case of a Spaces instance this may be acceptable.  So in this use-case, af:commandLinks, which would replace the <a>  (or goLink) tags. The example code given of the af:commandLink above is still valid. One last important item.  If you choose to use af:commandLinks, special attention must be given to the scenario in which java script has been used to produce the flyout effect in the custom menu UI.  In many cases that I have seen, the commandLink can only be invoked once, since there is a conflict between the custom java script with the ADF frameworks own scripting to control the view.  The recommendation here, would be to use a pure CSS approach to acheive the dropdown effects. One very important thing to note.  Due to another BUG, the WebCenter environement must be patched to BP3 (patch  p14076906).  Otherwise the leak is still present using the goLinkPrettyUrl method.  Thanks E.T.!  Now I can phone home and not worry about my application running out of resources due to my custom navigation! 

    Read the article

  • Displaying Exceptions Thrown or Caught in Managed Beans

    - by Frank Nimphius
    Just came a cross a sample written by Steve Muench, which somewhere deep in its implementation details uses the following code to route exceptions to the ADF binding layer to be handled by the ADF model error handler (which can be customized by overriding the DCErrorHandlerImpl class and configuring the custom class in DataBindings.cpx file) To route an exception to the ADFm error handler, Steve used the following code ((DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry()).reportException(ex); The same code however can be used in managed beans as well to enforce consistent error handling in ADF. As an example, lets assume a managed bean method hits an exception. To simulate this, let's use the following code: public void onToolBarButtonAction(ActionEvent actionEvent) {    throw new JboException("Just to tease you !!!!!");        } The exception shows at runtime as displayed in the following image: Assuming a try-catch block is used to intercept the exception caused by a managed bean action, you can route the error message display to the ADF model error handler. Again, let's simulate the code that would need to go into a try-catch block public void onToolBarButtonAction(ActionEvent actionEvent) {    JboException ex = new JboException("Just to tease you !!!!!");  BindingContext bctx = BindingContext.getCurrent();    ((DCBindingContainer)bctx.getCurrentBindingsEntry()).reportException(ex); } The error now displays as shown in the image below As you can see, the error is now handled by the ADFm Error handler, which - as mentioned before - could be a custom error handler. Using the ADF model error handling for displaying exceptions thrown in managed beans require the current ADF Faces page to have an associated PageDef file (which is the case if the page or view contains ADF bound components). Note that to invoke methods exposed on the business service it is recommended to always work through the binding layer (method binding) so that in case of an error the ADF model error handler is automatically used.

    Read the article

  • How can I unit test rendering output?

    - by stephelton
    I've been embracing Test-Driven Development (TDD) recently and it's had wonderful impacts on my development output and the resiliency of my codebase. I would like to extend this approach to some of the rendering work that I do in OpenGL, but I've been unable to find any good approaches to this. I'll start with a concrete example so we know what kinds of things I want to test; lets say I want to create a unit cube that rotates about some axis, and that I want to ensure that, for some number of frames, each frame is rendered correctly. How can I create an automated test case for this? Preferably, I'd even be able to write a test case before writing any code to render the cube (per usual TDD practices.) Among many other things, I'd want to make sure that the cube's size, location, and orientation are correct in each rendered frame. I may even want to make sure that the lighting equations in my shaders are correct in each frame. The only remotely useful approach to this that I've come across involves comparing rendered output to a reference output, which generally precludes TDD practice, and is very cumbersome. I could go on about other desired requirements, but I'm afraid the ones I've listed already are out of reach.

    Read the article

  • How should I deal with user agent parsing in logs?

    - by Mr. Jefferson
    My web app project includes logging functionality so we can see where visitors are coming from (referrer URL), what the popular user agents are, what pages are most popular, etc. The log is stored in SQL Server, and when I query the user agents I use a large (almost 100 lines) and growing CASE statement to separate the user agents using string matching (i.e. if the user agent contains the string "Firefox/9" then it's Firefox 9). Is there a better way to do this so I don't have to continually add to that CASE statement to deal with new browser releases? Also, how should I deal with less common, weird/unknown user agents? I've seen the following in the logs and have been unable to find good information online about what they are: "WordPress/3.3.1; http://www.facecolony.org" and "Mozilla/4.0 ( http://www.hairirons.org redips; <a href=http://hairirons.org/>chi hair iron</a>)". I'd guess they're bots/crawlers, but the sites they point to don't appear to reference web crawlers (or even be available sometimes). I've seen other user agents that aren't familiar to me, but I know they're bots because they include "bot" or "spider" or something similar in them.
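    To make the setup concrete, the kind of growing CASE statement being described looks roughly like the following (the table and column names are hypothetical, and the match strings are just examples):

        SELECT Browser, COUNT(*) AS Hits
        FROM (
            SELECT CASE
                       WHEN UserAgent LIKE '%Firefox/9%' THEN 'Firefox 9'
                       WHEN UserAgent LIKE '%MSIE 9.0%'  THEN 'Internet Explorer 9'
                       WHEN UserAgent LIKE '%Chrome/%'   THEN 'Chrome'
                       WHEN UserAgent LIKE '%bot%' OR UserAgent LIKE '%spider%' THEN 'Bot/Crawler'
                       ELSE 'Unknown'
                   END AS Browser
            FROM dbo.RequestLog        -- hypothetical log table and column
        ) AS classified
        GROUP BY Browser;

    One common way to stop the CASE from growing is to move the pattern-to-browser mapping into a lookup table and join on LIKE, so a new browser release becomes a data row rather than a code change.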

    Read the article

  • Two BULK INSERT issues I worked around recently

    - by AaronBertrand
    Since I am still afraid of SSIS, and because I am dealing mostly with CSV files and table structures that are relatively simple and require only one of the three letters in the acronym "ETL," I find myself using BULK INSERT a lot. I have been meaning to switch to using CLR, since I am doing a lot of file system querying using xp_cmdshell, but I haven't had the chance to really explore it yet. I know, a lot of you are probably thinking, wow, look at all those bad habits. But for every person thinking...(read more)
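    As a generic illustration of the kind of load being described (a CSV file into a simple table; the file path, table name and options below are placeholders, not the article's actual workarounds):

        -- minimal BULK INSERT of a CSV file into a staging table (names and paths are hypothetical)
        BULK INSERT dbo.StagingOrders
        FROM 'C:\loads\orders.csv'
        WITH (
            FIELDTERMINATOR = ',',     -- CSV column delimiter
            ROWTERMINATOR   = '\n',    -- one row per line
            FIRSTROW        = 2,       -- skip the header row
            TABLOCK                    -- allow faster, minimally logged loads
        );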

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way: protected Class loadClass(String name, boolean resolve) throws ClassNotFoundException { synchronized (getClassLoadingLock(name)) { // First, check if the class has already been loaded Class c = findLoadedClass(name); if (c == null) { long t0 = System.nanoTime(); try { if (parent != null) { c = parent.loadClass(name, false); } else { c = findBootstrapClassOrNull(name); } } catch (ClassNotFoundException e) { // ClassNotFoundException thrown if class not found // from the non-null parent class loader } if (c == null) { // If still not found, then invoke findClass in order // to find the class. long t1 = System.nanoTime(); c = findClass(name); // this is the defining class loader; record the stats sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0); sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1); sun.misc.PerfCounter.getFindClasses().increment(); } } if (resolve) { resolveClass(c); } return c; } } Where getClassLoadingLock simply does: protected Object getClassLoadingLock(String className) { Object lock = this; if (parallelLockMap != null) { Object newLock = new Object(); lock = parallelLockMap.putIfAbsent(className, newLock); if (lock == null) { lock = newLock; } } return lock; } This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per-classloader. As per the code above under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded and then delegates to its parent and so on till the boot loader is invoked for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew before hand which loader would actually load the class the locking would only need to be performed in that loader. As it stands the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass will return the class, the lock and the map entry is completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map. It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case it is likely that the classloader would need a different means to protect itself rather than a lock per class. 
Ultimately when a class file is located and the class has to be loaded, defineClass is called which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading and they both continue successfully. Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object\!). There are a number of obvious things we can try to solve this problem and they basically take three forms: Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, using a single shared lockMap instead of a per-loader lockMap. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (eg remove after loading, use weak references to the lock objects and cleanup the map periodically). There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we can not implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification because it can be reasoned that the following assertion should hold true: Object lock1 = loader.getClassLoadingLock(name); loader.loadClass(name); Object lock2 = loader.getClassLoadingLock(name); assert lock1 == lock2; Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded. 
Even then there are caveats, for example if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive. Even option 2 might need some wordsmithing on the specification because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders? So it seems that changing the specification will be inevitable if we wish to do something here. In which case lets go for something that more cleanly defines what we want to be doing: fully concurrent class-loading. Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics. Proposal: Fully Concurrent ClassLoaders The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader" or fully concurrent loader, for short. A fully concurrent loader uses no synchronization in loadClass and the VM uses the "parallel define class" mechanism. For a fully concurrent loader getClassLoadingLock() can return null (or perhaps not - it doesn't matter as we won't use the result anyway). At present we have not made any changes to this method. All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders. Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications. Preliminary webrevs: http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/ http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/ Please direct all comments to the mailing list [email protected].
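    As background for the "parallel capable" terminology used throughout the article, a custom loader opts in to the Java 7 scheme as sketched below (a minimal illustration with a hypothetical helper, not the proposed fully concurrent loader):

        // Minimal parallel-capable loader: registration must happen in a static initializer,
        // after which loadClass() synchronizes on the per-class-name lock from getClassLoadingLock().
        public class MyParallelLoader extends ClassLoader {
            static {
                registerAsParallelCapable();
            }

            public MyParallelLoader(ClassLoader parent) {
                super(parent);
            }

            @Override
            protected Class<?> findClass(String name) throws ClassNotFoundException {
                byte[] bytes = loadBytesSomehow(name);      // hypothetical helper
                return defineClass(name, bytes, 0, bytes.length);
            }

            private byte[] loadBytesSomehow(String name) throws ClassNotFoundException {
                throw new ClassNotFoundException(name);     // placeholder: real loaders read class files here
            }
        }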

    Read the article

  • Using RTL languages with MS Office in Wine 1.4

    - by saeed hardan
    I've installed MS Office 2007 in Ubuntu 12.04 using Wine 1.4 with no problems, and it works fine with the English language. However, I need to use it with Arabic and Hebrew, and it doesn't work when I switch to a Hebrew or Arabic keyboard: the typing gets reversed. I saw an earlier post about something similar, but it is closed and I think it concerned the earlier Wine 1.3. Supposedly Wine 1.4 has added RTL support -- is there a way to get it working?

    Read the article

  • Creating country specific twitter/facebook accounts

    - by user359650
    I see many companies with an international presence trying to localize their social media presence by creating country- or language-specific accounts. However, some seem to have done so without following a consistent pattern, one example being the World Wildlife Fund's Twitter accounts:
    World_Wildlife : verified account with 200K followers
    WWF : main account with 800K followers
    wwf_uk : lower case, with an underscore between WWF and the country indicator
    WWFCanada : upper case, with the country indicator attached to WWF
    ...
    I am planning to build a website which will hopefully grow global, and I would like to avoid this sort of inconsistency. Also, I compared what Twitter and Facebook allow in their usernames and found that they don't allow the same characters (for instance, the former doesn't allow . whereas the latter does), making it difficult to ensure consistency across social networks. Hence my questions: Are there known naming schemes for creating localized Twitter and Facebook accounts while maintaining a reasonable consistency between them (best effort)? Is there any research out there showing whether some schemes are better than others in terms of readability and/or SEO?

    Read the article

  • Why do my gvfs mounts not show up under ~/.gvfs?

    - by kynan
    From what I read, when mounting a network share via nautilus or gvfs-mount, the mount point should be in ~/.gvfs. This seems not to be the case for me: I tried mounting both an FTP and an SMB share, via both nautilus and gvfs-mount, under both Ubuntu Maverick and Natty, and in none of these cases did I see any mount point under ~/.gvfs. I can access the shares just fine in nautilus, but I want access via the command line, which is why I need a mount point in the file system.
    Edit: Debugging following James Henstridge's answer and enzotib's comment revealed that on my laptop gvfs-fuse-daemon is running, and consequently gvfs mounts show up in ~/.gvfs, whereas on the 2 workstations where ~/.gvfs remained empty, gvfs-fuse-daemon was not running. On all 3 machines there are other gvfs processes running: gvfsd, gvfs-afc-volume-monitor, ... On the laptop, mount | fgrep gvfs yields:
    gvfs-fuse-daemon on /home/xxx/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=xxx)
    That raises the questions: How are shares mounted without gvfs-fuse-daemon running? Is there no mount point created in that case, and is every access to the share a gvfs library call? Which daemon is responsible - gvfsd? What is the role of gvfs-fuse-daemon? Does it only create a fuse mount point in ~/.gvfs?

    Read the article

  • Variant Management– Which Approach fits for my Product?

    - by C. Chadwick
    Jürgen Kunz – Director Product Development – Oracle, ORACLE Deutschland B.V. & Co. KG
    Introduction
    In a difficult economic environment, it is important for companies to understand customer requirements in detail and to address them in their products. Customer-specific products, however, usually cause increased costs. Variant management helps to find the combination of standard components and custom components that best balances the customer's product requirements against product costs. Depending on the type of product, different approaches to variant management are applied. For example, the automotive product "car", or an electronic/high-tech product like a "computer", with a pre-defined set of options to be combined into an individual configuration (so-called "Assembled-to-Order" products), requires a different approach than products in heavy machinery, which are (at least partially) engineered in a customer-specific way (so-called "Engineered-to-Order" products). This article discusses different approaches to variant management. Starting with the simple Bill of Material (BOM), it presents three approaches to variant management that are provided by Agile PLM.
    Single level BOM and Variant BOM
    The single level BOM is the basic form of the BOM. The product structure is defined using assemblies and single parts, so a particular product is represented by a fixed product structure. As soon as you have to manage product variants, the single level BOM is no longer sufficient; a variant BOM is needed. The variant BOM is sometimes referred to as a 150% BOM, since it contains more parts and assemblies than are actually needed to assemble any one (final) product - hence the name: roughly 150% of the parts. You can evolve the variant BOM from the single level BOM by replacing single nodes with a placeholder node; the placeholder represents the possible variants of a part or assembly. Product structure nodes which are part of every product are so-called "must-have" parts, while "optional" parts can be omitted in the final product. Additional attributes allow limiting the quantity of parts/assemblies which can be assigned at a certain position in the variant BOM. Figure 1 shows the variant BOM of Agile PLM.
    Figure 1 Variant BOM in Agile PLM
    During the instantiation of the variant BOM, the placeholders are replaced by specific variants of the parts and assemblies. The selection of the desired or appropriate variants is done either step by step by the user or by applying pre-defined configuration rules. As a result of the instantiation, an independent BOM is created (Figure 2).
    Figure 2 Instantiated BOM in Agile PLM
    This kind of variant BOM can be used for "Assembled-to-Order"-type products as well as for "Engineered-to-Order"-type products. For "Assembled-to-Order"-type products, the instantiation is typically done automatically with pre-defined configuration rules. For "Engineered-to-Order"-type products, at least part of the product is selected manually in order to use customized parts/assemblies that have been engineered according to the specific customer requirements.
    Template BOM
    The Template BOM is used for "Engineered-to-Order"-type products. It is another type of variant BOM. The engineer works in a flexible environment which allows him to build the most creative solutions.
    At the same time, the engineer shall be guided to re-use existing solutions, and it shall be assured that product variants of the same product family share the same base structure. The Template BOM defines the basic structure of products belonging to the same product family. Let's take a gearbox as an example. The customer-specific configuration of the gearbox is influenced by several parameters (e.g. rpm range, transmitted torque), which are defined in the customer's requirement document. Figure 3 shows part of a Template BOM (yellow) and its relation to the product family hierarchy (blue).
    Figure 3 Template BOM
    Every component of the Template BOM has links to the variants that have been engineered so far for that component (depending on the level in the Template BOM, these are product variants, assembly variants or single part variants). This library of solutions, the so-called solution space, can be used by the engineers to build new product variants. In the best case, the engineer selects an existing solution variant, such as the gearbox shown in Figure 3. When the existing variants do not fulfill the specific requirements, a new variant is engineered. This new variant must be compliant with the given Template BOM: if we look at the gearbox in Figure 3, it must consist of a transmission housing, a connecting plate, a set of gears and a planetary transmission - assuming that all components are must-have components. The new variant enhances the solution space and is automatically available for re-use in future variants. The result of the instantiation of the Template BOM is a stand-alone BOM which represents the customer-specific product variant.
    Modular BOM
    The concept of the modular BOM was invented in the automotive industry. Passenger cars are so-called "Assembled-to-Order" products. The customer first selects the specific equipment of the car (so-called specifications) - for instance engine, audio equipment, rims, color. Based on this information the required parts are determined and the customer-specific car is assembled. Certain combinations of specifications are not available to the customer, either because they are not feasible from a technical perspective (e.g. a convertible with a sun roof) or because the combination is not offered for marketing reasons (e.g. steel rims with a sports line car).
    The modular BOM (yellow structure in Figure 4) is defined in the context of a specific product family (in the sample it is the product family "Speedstar"). The same modular BOM serves the different types of cars in the product family (e.g. sedan, station wagon). The assemblies or single parts of the car (blue nodes in Figure 4) are assigned at the leaf level of the modular BOM. The assignment of assemblies and parts to the modular BOM is enriched with a configuration rule (purple elements in Figure 4), which defines the conditions for using a specific assembly or single part. The configuration rule is valid in the context of a type of car (green elements in Figure 4). Color-specific parts are assigned to the color-independent parts via additional configuration rules (grey elements in Figure 4). The configuration rules use Boolean operators to connect the specifications. Additional consistency rules (constraints) may be used to define invalid combinations of specifications (so-called exclusions). Furthermore, consistency rules may be used to add specifications to the set of specifications.
    For instance, it is important that a car with a diesel engine is always built using the high-capacity battery.
    Figure 4 Modular BOM
    The calculation of the car configuration consists of several steps. First, the consistency rules (constraints) are applied; as a result, specifications might be added automatically. The second step determines the assemblies and single parts for the complete structure of the modular BOM by evaluating the configuration rules in the context of the current type of car. The evaluation of the rules for one component in the modular BOM might result in several rules being fulfilled; in this case the most specific rule (typically the longest rule) wins (a small code sketch of this evaluation step is given after the summary below). Thanks to this approach, it is possible to add a specific variant to the modular BOM without the need to change any other configuration rules. As a result, the whole set of configuration rules is easy to maintain. Finally, the color-specific assemblies and parts are determined and the configuration is complete.
    Figure 5 Calculated Car Configuration
    The result of the car configuration is shown in Figure 5: the list of assemblies and single parts (blue components in Figure 5) which are required to build the customer-specific car.
    Summary
    There are different approaches to variant management; three of them have been presented in this article. At the end of the day, it is the type of product which decides the best approach. For "Assembled-to-Order"-type products it is very likely that you can define the configuration rules and calculate the product variant automatically. Products of the "Engineered-to-Order" type, however, need to be engineered. Nevertheless, in the majority of cases part of the product structure can be generated automatically, in a similar way to "Assembled-to-Order"-type products. That said, it is important to first analyze the product portfolio in order to define the best approach to variant management.
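    To make the modular BOM calculation more tangible, here is a small, self-contained sketch of the rule-evaluation step described above: specifications are a set of strings, each part assignment at a BOM position carries a configuration rule over those specifications, and when several rules for the same position are fulfilled the most specific (longest) one wins. The class and method names are invented for illustration and are not Agile PLM APIs:
    import java.util.*;
    import java.util.function.Predicate;

    // Illustrative sketch of modular-BOM rule evaluation (invented names, not Agile PLM).
    public class ModularBomSketch {

        // A part assignment at one position of the modular BOM: the part is used
        // when its rule holds for the chosen set of specifications.
        record Assignment(String part, Predicate<Set<String>> rule, int specificity) {}

        // Evaluate one BOM position: among all fulfilled rules, the most specific wins.
        static Optional<String> resolvePosition(List<Assignment> assignments, Set<String> specs) {
            return assignments.stream()
                    .filter(a -> a.rule().test(specs))
                    .max(Comparator.comparingInt(Assignment::specificity))
                    .map(Assignment::part);
        }

        public static void main(String[] args) {
            // Consistency rule (constraint): a diesel engine implies the high-capacity battery.
            Set<String> specs = new HashSet<>(Set.of("diesel", "sedan", "sportsline"));
            if (specs.contains("diesel")) {
                specs.add("battery-high-capacity");
            }

            // Configuration rules for the "battery" position; specificity ~ rule length.
            List<Assignment> battery = List.of(
                new Assignment("BATTERY-STD", s -> true, 1),
                new Assignment("BATTERY-HC",  s -> s.contains("battery-high-capacity"), 2)
            );

            // The most specific fulfilled rule wins -> BATTERY-HC for a diesel car.
            System.out.println(resolvePosition(battery, specs).orElse("<none>"));
        }
    }
    A real modular BOM evaluates Boolean expressions parsed from rule definitions and also handles exclusions and color-specific assignments, but the winner-selection logic stays the same in spirit.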

    Read the article

< Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >