Search Results

Search found 676 results on 28 pages for 'mappings'.

Page 6/28

  • How to look up an NHibernate entity's table mapping from the type of the entity?

    - by snicker
    Once I've mapped my domain in NHibernate, how can I reverse-lookup those mappings elsewhere in my code? Example: the entity Pony is mapped to a table named "AAZF1203" for some reason. (Stupid legacy database table names!) I want to find that table name from the NH mappings using only typeof(Pony), because I have to write a query elsewhere. How can I make the following test pass?

        private const string LegacyPonyTableName = "AAZF1203";

        [Test]
        public void MakeSureThatThePonyEntityIsMappedToCorrectTable()
        {
            string ponyTable = GetNHibernateTableMappingFor(typeof(Pony));
            Assert.AreEqual(LegacyPonyTableName, ponyTable);
        }

    In other words, what does GetNHibernateTableMappingFor(Type t) need to look like?
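
    One possible shape for such a helper, assuming the NHibernate Configuration object used to build the session factory is kept accessible (MappingLookup, Initialize and the cached field are names of this sketch, not NHibernate APIs):

        using System;
        using NHibernate.Cfg;

        public static class MappingLookup
        {
            // The same Configuration instance that was used to build the ISessionFactory.
            private static Configuration _configuration;

            public static void Initialize(Configuration configuration)
            {
                _configuration = configuration;
            }

            public static string GetNHibernateTableMappingFor(Type entityType)
            {
                // GetClassMapping returns the PersistentClass metadata for the entity,
                // which carries the mapped table; it returns null for unmapped types.
                var persistentClass = _configuration.GetClassMapping(entityType);
                if (persistentClass == null)
                    throw new ArgumentException("No mapping found for " + entityType.FullName);
                return persistentClass.Table.Name;
            }
        }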

    Read the article

  • Disable Foreign Key Exposure in Entity Framework?

    - by davemackey
    When I originally created a Dynamic Data project I told it to expose the foreign keys, but now I can't create mappings between two entities because of the foreign keys. When I click on Mapping Details while focused on my association, I receive the message: "Mappings are not allowed for an association over exposed foreign keys." So I'd like to disable the exposure of the foreign keys, but I'm unsure how to do this without creating a new Entity Model from scratch. I'm not far along, so that wouldn't be hard, but I imagine there must be a programmatic switch for this?

    Read the article

  • Database migration

    - by graham.reeds
    We are migrating a client's own database schema to our own (both SQL Server). Most of the mappings from their schema to ours have been identified, and rules have been agreed on where the columns don't exactly align (default values etc.). Previously, depending on who was assigned the task, this has been done with either a mixture of SQL scripts or one-off VB apps. I was thinking there must be an application (commercial or otherwise) where you can define these mappings/rules and have it stream the data across. Surely setting up and configuring such a tool would take less effort than creating ad-hoc scripts... Is there such an app? Apart from the obvious 'be careful', any tips to mitigate the stress of a non-DBA porting one schema to another?
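
    For reference, the hand-rolled alternative usually amounts to one INSERT ... SELECT per target table with the agreed rules applied inline; a sketch with purely illustrative table and column names:

        -- Mapping script of the kind described: rename columns, apply agreed
        -- defaults where the schemas don't align, supply missing values.
        INSERT INTO Ours.dbo.Customer (CustomerId, FullName, Region, CreatedOn)
        SELECT c.cust_id,
               c.cust_name,
               COALESCE(c.region, 'UNKNOWN'),   -- agreed default when source is NULL
               GETDATE()                        -- column absent from source schema
        FROM Theirs.dbo.customers AS c;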

    Read the article

  • Users suddenly missing write permissions to the root drive c within an active directory domain

    - by Kevin
    I'm managing an Active Directory single-domain environment on some Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 machines. A few weeks ago I ran into a strange issue. Some users (not all!) report that they can no longer save, copy or write files to the root drive C:, neither on their clients (Vista, Win 7) nor via a Remote Desktop connection to a Windows Server 2008 machine. Since then, even programs that require direct write permissions to the root drive fail to run without administrator permissions. The affected users have local administrator permissions.

    The question I'm facing now is: what caused this change of system behavior? Why did this happen? I haven't found out yet. What was the last thing I did before it happened? The last action was the rollout of a GPO containing network drive mappings for the users, depending on their security group membership. All network drives are located on a Linux server with Samba enabled. We did not change any UAC settings, and they have always been activated. However, I can't imagine that rolling out this GPO caused the problem. Has anybody faced an issue like this?

    Just in case: I know there is a specific reason why a user without administrative privileges is prevented from writing to the root drive since Windows Vista and the introduction of UAC. I don't think those users should be able to write to drive C:; I'm trying to figure out why this is happening now when it was still working a few weeks ago. I also know that a user who is a member of the local Administrators group does not execute anything with administrator permissions by default, unless he or she runs a program with those permissions.

    What have I done so far?

    - I checked the permissions of the affected programs and the affected clients/servers. Didn't find anything special.
    - I checked ALL of our GPOs for any restrictions that could prevent the affected users from writing to the root drive. Did not find any such settings.
    - I checked the UAC settings of the affected users and compared them to users who can still write to the root drive. Everything is similar.
    - I googled through the internet and tried to find someone with a similar problem. Did not find one.

    Has anybody an idea? Thank you very much.

    Edit: The GPO that was rolled out does the following (please excuse if the settings are not named exactly like this; I translated them into English):

        Windows Settings -- Network Drive Mappings -- Drive N: -- General:
          Action: Replace
          Properties:
            Letter: N
            Location: \\path-to-drive\drivename
            Re-establish connection: deactivated
            Label as: Name_of_the_Share
            Use first available option: deactivated

        Windows Settings -- Network Drive Mappings -- Drive N: -- Public: Options:
          On error don't process any further elements for this extension: no
          Run as the logged-in user: no
          Remove element if it is not applied anymore: no
          Only apply once: no

        Security group (Attribute -- Value):
          bool -- AND
          not -- 0
          name -- domain\groupname
          sid -- sid-of-the-group
          userContext -- 1
          primaryGroup -- 0
          localGroup -- 0

        Security group (Attribute -- Value):
          bool -- OR
          not -- 0
          name -- domain\another-groupname
          sid -- sid-of-the-group
          userContext -- 1
          primaryGroup -- 0
          localGroup -- 0

    Edit: The error message an affected user gets says the following: "Due to an unexpected error you can't copy the file. Error code 0x80070522: The client is missing a required permission."

    The command icacls C: shows the following:

        NT-AUTORITY\SYSTEM:(OI)(CI)(F)
        PRE-DEFINED\Administrators:(OI)(CI)(F)
        computername\username:(OI)(CI)(F)

    A colleague just told me that the primary domain controller (PDC) also changed from Windows Server 2008 to Windows Server 2012. That may also be relevant. Any suggestions?
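
    For comparison: on a default Vista+ installation, icacls C:\ also lists ACEs for BUILTIN\Users (read/execute) and NT AUTHORITY\Authenticated Users (modify on subfolders, append on the root), which are missing from the output above. A hedged repair sketch; verify the exact default ACL against a healthy machine before applying anything like this:

        rem Restore the default Users/Authenticated Users entries on the root drive
        icacls C:\ /grant "BUILTIN\Users:(OI)(CI)RX"
        icacls C:\ /grant "NT AUTHORITY\Authenticated Users:(OI)(CI)(IO)M"
        icacls C:\ /grant "NT AUTHORITY\Authenticated Users:(AD)"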

    Read the article

  • SharePoint 2010 Hosting :: Error – HTTP Error 401.1 when Accessing Your SharePoint 2010 Site

    - by mbridge
    When attempting to view a MOSS (SharePoint) 2007 or SharePoint 2010 site locally from a Web Front End (WFE), you get an error stating: "HTTP Error 401.1 – Unauthorized: Access is denied due to invalid credentials." I have noticed that this happens on Windows Server 2003/2008 SP1/SP2/R2 when using host headers and Alternate Access Mappings on a web application. If you can access the site from remote machines but cannot access it from the server itself, then this might be your issue. If you cannot access the web site locally or remotely from other machines, then there is instead a security issue with the site and/or possibly a Kerberos-related problem.

    I implemented fix #2 from the Microsoft KB article below on all servers in the MOSS 2007 farm (WFEs and Indexing/Search server), and for all my newer farm installs, both SharePoint 2007 (MOSS) and SharePoint 2010, I use method 2 on all SharePoint and SQL Servers in the farm. If you use method 1 instead, add all host headers and Alternate Access Mappings for all web applications to the BackConnectionHostNames value; you will then be able to access the sites locally from the WFEs.

    Microsoft KB link: http://support.microsoft.com/kb/896861

    Method 1: Specify host names. Please follow these steps:
    1. Click Start, click Run, type regedit, and then click OK.
    2. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
    3. Right-click MSV1_0, point to New, and then click Multi-String Value.
    4. Type BackConnectionHostNames, and then press ENTER.
    5. Right-click BackConnectionHostNames, and then click Modify.
    6. In the Value data box, type the host name or the host names for the sites that are on the local computer, and then click OK.
    7. Quit Registry Editor, and then restart the IISAdmin service.

    Method 2: Disable the loopback check. Please follow these steps:
    1. Click Start, click Run, type regedit, and then click OK.
    2. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
    3. Right-click Lsa, point to New, and then click DWORD Value.
    4. Type DisableLoopbackCheck, and then press ENTER.
    5. Right-click DisableLoopbackCheck, and then click Modify.
    6. In the Value data box, type 1, and then click OK.
    7. Quit Registry Editor, and then restart your computer.

    Give it a try, and good luck.
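
    The same two changes can be scripted with the stock reg.exe tool instead of clicking through Registry Editor; a sketch (the host names in the multi-string value are placeholders for your own):

        rem Method 1: BackConnectionHostNames (separate host names with \0)
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "portal.example.com\0intranet.example.com" /f

        rem Method 2: disable the loopback check
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f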

    Read the article

  • SQL SERVER – Introduction to Adaptive ETL Tool – How adaptive is your ETL?

    - by pinaldave
    I am often reminded of the fact that BI/data warehousing infrastructure is very brittle and not very adaptive to change. There are lots of basic use cases where data needs to be frequently loaded into SQL Server or another database. What I have found is that as long as the sources and targets stay the same, SSIS, or any other ETL tool for that matter, does a pretty good job handling these types of scenarios. But what happens when you are faced with more challenging scenarios, where the data formats and possibly the data types of the source data change from customer to customer?

    Let's examine a real-life situation where a health management company receives claims data from its customers in various source formats. Even though this company supplied all its customers with the same claims forms, it ended up building one-off ETL applications to process the claims for each customer. Why, you ask? Well, it turned out that the claims data from the various regional hospitals they needed to process had slightly different data formats, e.g. "integer" versus "string" data field definitions. Moreover, the data itself was represented with slight nuances, e.g. "0001124" or "1124" or "0000001124" for a particular account number, which forced them, as I alluded to above, to build new ETL processes for each customer in order to overcome the inconsistencies in the various claims forms. As a result, they ended up with a lot of redundancy across these ETL processes and quickly recognized that their system would become more difficult to maintain over time.

    So imagine for a moment that you could use an ETL tool that helps you abstract the data formats so that your ETL transformation process becomes more reusable. Imagine that one claims form represents a data item as a string – acc_no(varchar) – while a second claims form represents the same data item as an integer – account_no(integer). This would break your traditional ETL process, as the data mappings are hard-wired. But in a world of abstracted definitions, all you need to do is create parallel data mappings to a common data representation used within your ETL application; that is, map both external data fields to a common attribute whose name and type remain unchanged within the application.

    acc_no(varchar) is mapped to account_number(integer) [expressor Studio: first claim form schema mapping]
    account_no(integer) is also mapped to account_number(integer) [expressor Studio: second claim form schema mapping]

    All the data processing logic that follows manipulates the data as an integer value named account_number. These are the kinds of problems that the expressor data integration solution automates for you. I've been following them since last year and encourage you to check them out by downloading their free expressor Studio ETL software.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL, SSIS
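
    The underlying idea, sketched here in plain C# rather than expressor's actual API: once both source fields are mapped to the common attribute, downstream logic only ever sees one name and one type.

        using System;

        static class ClaimsMapping
        {
            // Both claim forms' fields funnel into the common attribute account_number.
            public static int NormalizeAccountNumber(object sourceField) => sourceField switch
            {
                int n    => n,              // account_no(integer) passes straight through
                string s => int.Parse(s),   // "0001124", "1124", "0000001124" all become 1124
                _        => throw new ArgumentException("unsupported source field type")
            };
        }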

    Read the article

  • First-Global-Teach for the Oracle Imaging and Process Management 11g: Administration: San Francisco

    - by stephen.schleifer
    First-Global-Teach for the Oracle Imaging and Process Management 11g: Administration: San Francisco | June 23-25. This course enables participants to use Oracle Imaging and Process Management (I/PM) 11g to access, track, and annotate documents. In addition, they get an overview of the product architecture of Oracle I/PM running on Oracle WebLogic Server. The course also delves into administration tasks such as security permissions, configuration tasks such as creating BPEL connections, and procedures for creating applications, searches, and input mappings. Customers and partners can register by looking up the course (#D61575GC10) on http://education.oracle.com

    Read the article

  • Big Data Matters with ODI12c

    - by Madhu Nair
    contributed by Mike Eisterer

    On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c). This release signifies improvements to Oracle's Data Integration portfolio of solutions, particularly Big Data integration.

    Why Big Data = Big Business

    Organizations are gaining greater insights and actionability through the increased storage, processing and analytical benefits offered by Big Data solutions. New technologies and frameworks like HDFS, NoSQL, Hive and MapReduce support these benefits now. As more data is collected, analytical requirements increase, the complexity of managing transformations and aggregations of data compounds, and organizations need scalable Data Integration solutions. ODI12c provides enterprise solutions for the movement, translation and transformation of information and data, heterogeneously and in Big Data environments, through:

    - The ability for existing ODI and SQL developers to leverage new Big Data technologies.
    - A metadata-focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions.
    - Integration between many heterogeneous environments and technologies such as HDFS and Hive.
    - Generation of Hive Query Language.

    Working with Big Data using Knowledge Modules

    ODI12c provides developers with the ability to define sources and targets and visually develop mappings to effect the movement and transformation of data. As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs). These KMs are contextual to the technologies and platforms to be integrated. Steps and actions needed to manage the data integration are pre-built and configured within the KMs. The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs specifically designed to integrate with Big Data technologies. The Big Data KMs include:

    - Check Knowledge Module
    - Reverse Engineer Knowledge Module
    - Hive Transform Knowledge Module
    - Hive Control Append Knowledge Module
    - File to Hive (LOAD DATA) Knowledge Module
    - File-Hive to Oracle (OLH-OSCH) Knowledge Module

    Nothing beats an example:

    To demonstrate the use of the KMs which are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets. The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process. In this mapping example, movie data is being moved from an HDFS source into a Hive table. Some of the attributes, such as "CUSTID to custid", have been mapped over.

    [Figure 1: Defining the Mapping]

    Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project. The Big Data KMs have been made available to the project through the KM import process. Generally, this is done prior to defining the mapping.

    [Figure 2: Importing the Big Data Knowledge Modules]

    Following the import, the KMs are available in the Designer Navigator.
    [Figure 3: The Project View in Designer, Showing Installed IKMs]

    Once the KM is imported, it may be assigned to the mapping target. This is done by selecting the Physical View of the mapping and examining the Properties of the target. In this case MOVIAPP_LOG_STAGE is the target of our mapping.

    [Figure 4: Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target]

    Alternative KMs may be selected as well, providing flexibility and abstracting the logical mapping from the physical implementation. Our mapping may be applied to other technologies as well. The mapping is now complete and is ready to run. We will see more in a future blog about running a mapping to load Hive.

    To complete the quick ODI for Big Data overview, let us take a closer look at what the IKM File to Hive is doing for us. ODI provides differentiated capabilities by defining, within the KM, the process and steps which normally would have to be manually developed, tested and implemented. As shown in Figure 5, the KM is preparing the Hive session, managing the Hive tables, performing the initial load from HDFS and then performing the insert into Hive. HDFS and Hive options are selected graphically, as shown in the properties in Figure 4.

    [Figure 5: Process and Steps Managed by the KM]

    What's Next

    Big Data, being the shape-shifting business challenge it is, is fast evolving into the deciding factor between market leaders and the rest. Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon using the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop:

    - Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets
    - Generating Transformations using Hive Query Language
    - Loading Oracle from Hadoop Sources

    For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site: http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html

    Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.

    Read the article

  • Kicking off the ODI12c Blog Series

    - by Madhu Nair
    It is always exciting to talk about a new release, especially one as significant as the newly released Oracle Data Integrator 12c (ODI12c). Why? Because it is packed with features that address many requirements of the user community. If you missed the sneak previews at this year's Oracle Open World sessions, do not despair: over the coming weeks the ODI12c team of developers and consultants will be sharing their perspectives on key features, experiences and best practices for ODI12c right here through a series of blogs. Before diving into feature details in subsequent blogs, it helps to understand the overall themes that went into developing ODI12c.

    Let the Productivity Flow: Let us face it, developer user experience is always top of mind for any enterprise software. ODI12c addresses this through the introduction of declarative flow-based mappings (the topic of our next ODI blog, by the way!). Reusability has been addressed through the introduction of reusable mappings, cutting down development time for repeated logic. An enhanced debugger makes life easy in complex, granular debugging scenarios. Unique repository IDs now allow you to manage multiple repositories.

    Performance is Paramount: Another major area of focus for ODI12c is performance. Increased parallelism (like the multiple target table load feature), reduced session overheads and the ability to customize load plans through physical views all empower the user to tune run times for extreme performance.

    [Figure: mapping showing a multiple target load]
    [Figure: physical representation allowing users to choose execution options]

    Integrating it all: This release is not just about ODI12c as a standalone product. Closer integration with Oracle GoldenGate now brings Change Data Capture (CDC) capabilities into ODI12c. Oracle Warehouse Builder (OWB) jobs can now be executed and monitored from within ODI12c. And ODI12c is fast becoming the de facto standard for Oracle Applications that need data integration in their solutions, the best example being the latest release of the Oracle BI Applications technology.

    Even as we bring you in-depth write-ups about the features, there are some great previews and resources already out there, like this super entry by beta partner Rittman Mead Consulting and the ODI12c Key Features White Paper. You can download ODI12c here (this post helps). The best, though, is the upcoming Executive Webcast featuring customers and executives who have seen and conceived the product. Don't miss it!

    Read the article

  • Does redirecting old site's URLs to new site's front page hurt a page's ranking?

    - by Kaivosukeltaja
    An old site that is being rewritten needs to have its URLs redirected to the new site. There are a few hundred pages that may or may not have corresponding pages on the new site, probably with different slugs, and adding mappings manually would require more hours than we can spare. It was suggested that all old URLs be redirected to the new front page, but I remember reading somewhere that this incurs a penalty in page rank because it's what link farmers do. Is this true, or can we take the easy way out?

    Read the article

  • IIS 7.5 How to disable "Verify File Exists" for siteminder handler

    - by HariM
    We are trying to use ASP.NET MVC with SiteMinder for single sign-on. This is on Windows Server 2008 R2 with IIS 7.5, SiteMinder agent version 6QMR6.

    Problem: SiteMinder protects physical files that exist, but it does not protect the folder when we try to access a file that does not exist. It should redirect to the login page even if the file doesn't exist, as long as the user is accessing a protected folder. How can IIS 7.5 be configured so that it does not verify that a file exists before authentication by SiteMinder? SiteMinderWebAgent is a handler (wildcard script map) we created using ISAPI6WebAgent.dll.

    How can an ASP.NET MVC request be protected with SiteMinder? (I added this as my previous question did not solve the problem.) The MVC request shows up in the IIS log but not in the SiteMinder log.

    Update: Microsoft Support says IIS 7.5, and even earlier versions, does not support wildcard mappings on two ISAPI handlers that both use a plain * wildcard. Currently in my case SiteMinder has a * wildcard and ASP.NET MVC (handler aspnet_isapi) has a * wildcard to handle the requests, and ordered priority does not work in this case. I am not convinced by the answer, but will wait until tomorrow for them to get back to me.
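
    For the "do not verify a file exists" part specifically: in IIS 7.x handler mappings this corresponds to the handler's resourceType attribute (the "Invoke handler only if request is mapped to..." checkbox in the UI); resourceType="Unspecified" runs the handler whether or not the mapped file or folder exists. A hedged sketch of what the handler entry might look like in web.config, with the DLL path as a placeholder:

        <system.webServer>
          <handlers>
            <add name="SiteMinderWebAgent"
                 path="*"
                 verb="*"
                 modules="IsapiModule"
                 scriptProcessor="C:\smwebagent\bin\ISAPI6WebAgent.dll"
                 resourceType="Unspecified"
                 requireAccess="None" />
          </handlers>
        </system.webServer>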

    Read the article

  • Synergy Key Mapping

    - by Tauren
    I'm running a Synergy server on Ubuntu and a Synergy+ client on OS X. The server has a standard Windows keyboard with shift, ctrl, windows, and alt keys. My MacBook Pro has shift, fn, control, alt/option, and command keys. When I press ctrl-c, ctrl-v, etc., the appropriate copy/paste action doesn't happen on the Mac, but it does in Ubuntu. If I'm controlling the Mac and press alt-c, alt-v, then I get the copy/paste action. So I played around with key mapping in synergy.conf and found that the following allows me to do copy/paste with ctrl-c/ctrl-v:

        section: screens
            godzilla:
            mbp.local:
                ctrl = alt
                alt = ctrl
        end

    Is this all that I need to do? Or are there other mappings that will help as well? The Synergy configuration page refers to the following key mappings. What are the equivalent keys for each of these on the Windows keyboard and the Mac keyboard? What is a meta or super key?

        shift = {shift|ctrl|alt|meta|super|none}
        ctrl = {shift|ctrl|alt|meta|super|none}
        alt = {shift|ctrl|alt|meta|super|none}
        meta = {shift|ctrl|alt|meta|super|none}
        super = {shift|ctrl|alt|meta|super|none}

    Thanks!

    Read the article

  • Force netsh/arp binding multicast IP addres with specific MAC address

    - by Olivier
    I would like to set up a binding from an IP address to a MAC address using netsh. The goal is to bind an IP address which is a multicast address (224.224.x.y) to a given MAC address which is NOT the one calculated from the multicast IP address (01:00:5e:X:Y:Z). It used to work with Windows XP (was it a bug that happened to be "perfect" for my needs?), but Windows 7/8/8.1 forces the MAC address to the calculated one instead of letting me put in what I want. (http://nettools.aqwnet.com/ipmaccalc/ipmaccalc.php shows the MAC address calculation for a multicast IP address.)

    Thus I'm doing the following. Listing existing mappings:

        netsh.exe interface ip show neighbors "Ethernet"

        Interface 12 : Ethernet
        Internet address    Physical address     Type
        224.0.0.22          01-00-5e-XX-YY-ZZ    static

    Then adding my interface mapping manually:

        netsh.exe interface ip add neighbors "Ethernet" "224.xxx.yyy.zzz" "00-80-EE-UU-VV-WW"

    Finally, listing my mappings again:

        netsh.exe interface ip show neighbors "Ethernet"

        Interface 12 : Ethernet
        Internet address    Physical address     Type
        224.0.0.22          01-00-5e-XX-YY-ZZ    static
        224.xxx.yyy.zzz     01-00-5e-UU-VV-WW    static

    As you can see, the MAC address of the second entry (the one I just added) has been dynamically replaced by the calculated MAC address corresponding to my IP address. The calculation is done as follows (displayed in hex): UU=(xxx-128), VV=yyy, WW=zzz. But I don't want that behavior. My IP address and MAC address cannot be changed, and I must associate them exactly as given. Does anybody know how to disable MAC address substitution/calculation in netsh? Thanks, Olivier.

    Read the article

  • Missing Home Folder XP Clients 2008R2 Domain

    - by minamhere
    We just completed a migration from Server 2003 to Server 2008 R2. Everything seems to have gone well, except that many of our desktops have stopped mapping the home folder as set in Active Directory. Other mappings that are defined on individual clients map just fine; these mappings are all on the same file servers as the failing home folders. Half of the users are on one file server and half are on another, and users from both servers are having this problem.

    I have enabled the Group Policy setting to "Wait for network before logging in" and the policy to "Run logon scripts synchronously". There are no errors on the domain controller or on either file server. When I enabled Group Policy Preferences as an attempted workaround, I get this error:

        The user 'V:' preference item in the '<Policy Name>' Group Policy object did not apply because it failed with error code '0x800708ca This network connection does not exist.' This error was suppressed.

    This seems to indicate that the network connection is not ready by the time Group Policy is processed. But isn't that exactly the point of the "Wait before logging in" and "Run logon scripts synchronously" settings?

    Some other background facts: The new Server 2008 R2 installation is a virtual machine. It is on a new subnet in a different building from the old server. DNS and DHCP were also migrated from the old DC to this new DC. These home folders were all working properly before the migration.

    Are there new security restrictions/policies in Server 2008 R2 that might be causing this? Is there a way to check whether I have an underlying network connectivity issue? Maybe moving the server to the new building is causing a delay/timeout? Any thoughts or ideas on what could be causing this or how I can resolve it? Thanks.

    Read the article

  • Postfix: Modify sender address based on recipient

    - by PJ P
    We have a Postfix server that receives mail from our application servers. Senders are in the form [email protected] (where host.fqdn can vary, depending on the source server) and recipients can be internal or external users. Messages going to external users should have the sender changed to [email protected].

    I have tried using canonical maps, but since those are handled by the cleanup daemon, before any transport decisions are made, they would affect all sender addresses. I have also tried creating a custom smtp transport with generic mappings and configuring transport_maps to use that custom smtp transport for external domains; however, generic mappings affect both sender and recipient addresses. Lastly, I've tried the following:

    1. Create a custom smtpd daemon that specifies sender canonical maps and a unique transport table.
    2. Send all externally addressed mail to that custom daemon.

    Ideally, the sender canonical maps would transform the sender address and the unique transport table would relay messages to the internet. However, evidently only one transport table can be used per Postfix instance. I want to avoid creating an entirely new Postfix instance to accommodate this rewriting. Any suggestions? (Thanks in advance.)
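
    One approach that may sidestep the generic-maps objection: smtp_generic_maps only rewrites addresses that actually match the map, so if the map is keyed solely on the internal host.fqdn sender domains, external recipient addresses pass through untouched. A hedged sketch with placeholder domains and file names (run postmap on the hash files after editing):

        # /etc/postfix/master.cf -- a second smtp client with its own generic map
        smtpext   unix  -       -       n       -       -       smtp
            -o smtp_generic_maps=hash:/etc/postfix/generic_external

        # /etc/postfix/main.cf -- route mail for external domains through it
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport -- keep internal domains on the default transport
        internal.example.com    :
        *                       smtpext:

        # /etc/postfix/generic_external -- matches internal sender domains only
        @host1.fqdn             donotreply@example.com
        @host2.fqdn             donotreply@example.com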

    Read the article

  • Administrator view all mapped drives

    - by kskid19
    In my understanding of security, an administrator should be able to view all connections to and from a computer, just as they can view all processes and their owners, and all network connections and their owning processes. However, Windows 8 seems to have disabled this. On Windows Vista and later, when you run net use from an elevated prompt as administrator, you get back all mapped drives, listed as unavailable. In Windows 8, the same command run from an elevated prompt returns "There are no entries in the list." The behavior is identical for PowerShell's Get-WmiObject Win32_LogonSessionMappedDisk.

    A workaround for persistent mappings is to run Get-ChildItem Registry::HKU\*\Network\*. This does not include temporary mappings (in my particular example, the mapping was created through Explorer on an administrator account and I did not select "Reconnect at sign-in").

    Is there a direct/simple way for an administrator to view the connections of any user (short of a script that runs under each user's context)? I have read "Some Programs Cannot Access Network Locations When UAC Is Enabled" but I do not think it particularly applies. ServerFault has an answer, but it still does not address non-persistent drives: How can I tell what network drives users have mapped?
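
    A sketch expanding on the registry workaround (persistent mappings only): enumerate each loaded user hive's Network key and resolve the hive's SID to an account name. It assumes every hive with a Network key has a resolvable SID; wrap the Translate call in a try/catch if some do not:

        Get-ChildItem Registry::HKEY_USERS | ForEach-Object {
            $sid = $_.PSChildName
            $networkPath = "Registry::HKEY_USERS\$sid\Network"
            if (Test-Path $networkPath) {
                # Resolve the SID of the hive to DOMAIN\user
                $user = ([System.Security.Principal.SecurityIdentifier]$sid).Translate([System.Security.Principal.NTAccount]).Value
                Get-ChildItem $networkPath | ForEach-Object {
                    [pscustomobject]@{
                        User       = $user
                        Drive      = $_.PSChildName
                        RemotePath = $_.GetValue('RemotePath')
                    }
                }
            }
        }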

    Read the article

  • Howto setup neocomplcache?

    - by eddy
    I just started using Vim and saw a cool plugin: neocomplcache (http://www.vim.org/scripts/script.php?script_id=2620). My problem is that I can't get it to work properly. After installing, I took the example config from the neocomplcache help files and added the lines to my .vimrc. At first I wanted to create a simple LaTeX file (there are snippets for tex). After typing "begi" a menu appears, and I can choose between the snippets with TAB or <C-n>. But how do I get them to expand? <C-k> does not work, and I don't understand why.

    .vimrc:

        ...
        " Plugin key-mappings.
        imap <C-k> <Plug>(neocomplcache_snippets_expand)
        smap <C-k> <Plug>(neocomplcache_snippets_expand)
        inoremap <expr><C-g> neocomplcache#undo_completion()
        inoremap <expr><C-l> neocomplcache#complete_common_string()

        " Recommended key-mappings.
        " <CR>: close popup and save indent.
        inoremap <expr><CR> neocomplcache#smart_close_popup() ."\<CR>"
        " <TAB>: completion.
        inoremap <expr><TAB> pumvisible() ? "\<C-n>" : "\<TAB>"
        " <C-h>, <BS>: close popup and delete backward char.
        inoremap <expr><C-h> neocomplcache#smart_close_popup()."\<C-h>"
        inoremap <expr><BS> neocomplcache#smart_close_popup()."\<C-h>"
        inoremap <expr><C-y> neocomplcache#close_popup()
        inoremap <expr><C-e> neocomplcache#cancel_popup()
        ...

    Read the article

  • Have .cmd in startup wait for system wide startup to run.

    - by Dan
    On a computer I'm not an administrator on, there is a startup file (a .cmd file in C:\Documents and Settings\All Users\Start Menu\Programs\Startup) for all users. It does several things I don't like, so I have created my own .cmd file, which I have placed in C:\Documents and Settings\[user]\Start Menu\Programs\Startup. I want my code to run after the system-wide one, because my script first undoes the network mappings the system-wide file made and then replaces them with my own network mappings. How can I make my script wait and start as soon as the system-wide one is done? (I cannot remove or edit the system-wide file.) Thanks.

    EDIT: The time the system-wide file takes to run varies, and I would like my file to run right after it. The "if exist" method seems a little too contingent, since the system-wide script can change or a file could be moved. I am hoping to give my script to a few of my coworkers, so I'd like it to work without me having to update anything. Also, being a Linux guy, I don't know cmd code, so please write out any coding suggestions.
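
    Despite the caveat above about "if exist", a polling sketch is the simplest thing that works without touching the system-wide file. X: and \\server\share are placeholders for a drive letter the system-wide script is known to map and the share you actually want:

        @echo off
        rem Poll (up to ~60 seconds) until the system-wide script has mapped X:
        set /a tries=0
        :wait
        if exist X:\ goto remap
        set /a tries+=1
        if %tries% geq 12 goto remap
        rem ping is the portable way to sleep ~5 seconds on XP
        ping -n 6 127.0.0.1 >nul
        goto wait
        :remap
        rem Undo the system-wide mapping and apply our own
        net use X: /delete /y
        net use X: \\server\share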

    Read the article

  • Postfix: How to configure Postfix with virtual Dovecot mailboxes?

    - by user75247
    I have configured a Postfix mail server for two domains: domain1.com and domain2.com. In my configuration, domain1 has both virtual users with Maildirs and aliases to forward mail to local users (e.g. root, webmaster), plus some small mailing lists. It also has some virtual mappings to non-local domains. Domain2, on the other hand, has only virtual alias mappings, mainly to corresponding 'users' at domain1 (e.g. mail to [email protected] should be forwarded to [email protected]).

    My problem is that currently Postfix accepts mail even for users that don't exist in the system. Mail to existing users and /etc/aliases works fine. The Postfix documentation states that the same domain should never be specified in both mydestination and virtual_mailbox_maps, but if I leave mydestination blank then Postfix validates recipients against virtual_mailbox_maps yet rejects mail for local aliases of domain1.com.

    /etc/postfix/main.cf:

        myhostname = domain1.com
        mydomain = domain1.com
        mydestination = $myhostname, localhost.$mydomain, localhost
        virtual_mailbox_domains = domain1.com
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_mailbox_base = /home/vmail/domains
        virtual_alias_domains = domain2.com
        virtual_alias_maps = hash:/etc/postfix/virtual
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        virtual_transport = dovecot

    /etc/postfix/virtual:

        domain1.com         right-hand-content-does-not-matter
        firstname.lastname  user1
        [more aliases..]
        domain2.com         right-hand-content-does-not-matter
        @domain2.com        @domain1.com

    /etc/postfix/vmailbox:

        [email protected]    user1/Maildir
        [email protected]    user2/Maildir

    /etc/aliases:

        root: :include:/etc/postfix/aliases/root
        webmaster: :include:/etc/postfix/aliases/webmaster
        [etc..]

    Is this approach correct, or is there some other way to configure Postfix with Dovecot (virtual) Maildirs and Postfix aliases?
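
    One commonly suggested arrangement for this situation: keep domain1.com out of mydestination (so virtual_mailbox_maps does the recipient validation) and express the 'local' aliases as virtual aliases that hand off to the localhost domain, where /etc/aliases still applies. A hedged sketch of the extra /etc/postfix/virtual entries:

        # Route domain1.com aliases through the virtual table to localhost, which
        # remains in mydestination, so /etc/aliases expansion still happens there.
        root@domain1.com         root@localhost
        webmaster@domain1.com    webmaster@localhost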

    Read the article

  • How to deploy jBPM 3.2.2 console on Oracle 10g iAS

    - by Balint Pato
    Hi! Does anybody have experience regarding deployment of the jBPM Administration Console on Oracle 10g iAS? I successfully deployed it using an .ear, security mappings working, I can even login to the console, Hibernate finds the JNDI datasource but it cannot find the TransactionManager. I see no log, only the exception thrown in the jsf page: Can anybody help me? The hibernate.cfg.xml file now looks like this: <?xml version='1.0' encoding='utf-8'?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <!-- hibernate dialect --> <property name="hibernate.dialect">org.hibernate.dialect.Oracle9Dialect</property> <!-- JDBC connection properties (begin) === <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property> <property name="hibernate.connection.url">jdbc:hsqldb:mem:jbpm</property> <property name="hibernate.connection.username">sa</property> <property name="hibernate.connection.password"></property> ==== JDBC connection properties (end) --> <property name="hibernate.cache.provider_class">org.hibernate.cache.HashtableCacheProvider</property> <!-- DataSource properties (begin) --> <property name="hibernate.connection.datasource">java:/JbpmDS</property> <!-- DataSource properties (end) --> <!-- JTA transaction properties (begin) --> <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</property> <!-- <property name="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.JBossTransactionManagerLookup</property>--> <!-- JTA transaction properties (end) --> <!-- CMT transaction properties (begin) === <property name="hibernate.transaction.factory_class">org.hibernate.transaction.CMTTransactionFactory</property> <property name="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.JBossTransactionManagerLookup</property> ==== CMT transaction properties (end) --> <!-- logging properties (begin) --> <property name="hibernate.show_sql">true</property> <property name="hibernate.format_sql">true</property> <property name="hibernate.use_sql_comments">true</property> <--==== logging properties (end) --> <!-- ############################################ --> <!-- # mapping files with external dependencies # --> <!-- ############################################ --> <!-- following mapping file has a dependendy on --> <!-- 'bsh-{version}.jar'. --> <!-- uncomment this if you don't have bsh on your --> <!-- classpath. you won't be able to use the --> <!-- script element in process definition files --> <mapping resource="org/jbpm/graph/action/Script.hbm.xml"/> <!-- following mapping files have a dependendy on --> <!-- 'jbpm-identity.jar', mapping files --> <!-- of the pluggable jbpm identity component. --> <!-- Uncomment the following 3 lines if you --> <!-- want to use the jBPM identity mgmgt --> <!-- component. 
--> <!-- identity mappings (begin) --> <mapping resource="org/jbpm/identity/User.hbm.xml"/> <mapping resource="org/jbpm/identity/Group.hbm.xml"/> <mapping resource="org/jbpm/identity/Membership.hbm.xml"/> <!-- identity mappings (end) --> <!-- following mapping files have a dependendy on --> <!-- the JCR API --> <!-- jcr mappings (begin) === <mapping resource="org/jbpm/context/exe/variableinstance/JcrNodeInstance.hbm.xml"/> ==== jcr mappings (end) --> <!-- ###################### --> <!-- # jbpm mapping files # --> <!-- ###################### --> <!-- hql queries and type defs --> <mapping resource="org/jbpm/db/hibernate.queries.hbm.xml" /> <!-- graph.action mapping files --> <mapping resource="org/jbpm/graph/action/MailAction.hbm.xml"/> <!-- graph.def mapping files --> <mapping resource="org/jbpm/graph/def/ProcessDefinition.hbm.xml"/> <mapping resource="org/jbpm/graph/def/Node.hbm.xml"/> <mapping resource="org/jbpm/graph/def/Transition.hbm.xml"/> <mapping resource="org/jbpm/graph/def/Event.hbm.xml"/> <mapping resource="org/jbpm/graph/def/Action.hbm.xml"/> <mapping resource="org/jbpm/graph/def/SuperState.hbm.xml"/> <mapping resource="org/jbpm/graph/def/ExceptionHandler.hbm.xml"/> <mapping resource="org/jbpm/instantiation/Delegation.hbm.xml"/> <!-- graph.node mapping files --> <mapping resource="org/jbpm/graph/node/StartState.hbm.xml"/> <mapping resource="org/jbpm/graph/node/EndState.hbm.xml"/> <mapping resource="org/jbpm/graph/node/ProcessState.hbm.xml"/> <mapping resource="org/jbpm/graph/node/Decision.hbm.xml"/> <mapping resource="org/jbpm/graph/node/Fork.hbm.xml"/> <mapping resource="org/jbpm/graph/node/Join.hbm.xml"/> <mapping resource="org/jbpm/graph/node/MailNode.hbm.xml"/> <mapping resource="org/jbpm/graph/node/State.hbm.xml"/> <mapping resource="org/jbpm/graph/node/TaskNode.hbm.xml"/> <!-- context.def mapping files --> <mapping resource="org/jbpm/context/def/ContextDefinition.hbm.xml"/> <mapping resource="org/jbpm/context/def/VariableAccess.hbm.xml"/> <!-- taskmgmt.def mapping files --> <mapping resource="org/jbpm/taskmgmt/def/TaskMgmtDefinition.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/def/Swimlane.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/def/Task.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/def/TaskController.hbm.xml"/> <!-- module.def mapping files --> <mapping resource="org/jbpm/module/def/ModuleDefinition.hbm.xml"/> <!-- bytes mapping files --> <mapping resource="org/jbpm/bytes/ByteArray.hbm.xml"/> <!-- file.def mapping files --> <mapping resource="org/jbpm/file/def/FileDefinition.hbm.xml"/> <!-- scheduler.def mapping files --> <mapping resource="org/jbpm/scheduler/def/CreateTimerAction.hbm.xml"/> <mapping resource="org/jbpm/scheduler/def/CancelTimerAction.hbm.xml"/> <!-- graph.exe mapping files --> <mapping resource="org/jbpm/graph/exe/Comment.hbm.xml"/> <mapping resource="org/jbpm/graph/exe/ProcessInstance.hbm.xml"/> <mapping resource="org/jbpm/graph/exe/Token.hbm.xml"/> <mapping resource="org/jbpm/graph/exe/RuntimeAction.hbm.xml"/> <!-- module.exe mapping files --> <mapping resource="org/jbpm/module/exe/ModuleInstance.hbm.xml"/> <!-- context.exe mapping files --> <mapping resource="org/jbpm/context/exe/ContextInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/TokenVariableMap.hbm.xml"/> <mapping resource="org/jbpm/context/exe/VariableInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/ByteArrayInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/DateInstance.hbm.xml"/> <mapping 
resource="org/jbpm/context/exe/variableinstance/DoubleInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/HibernateLongInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/HibernateStringInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/LongInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/NullInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/StringInstance.hbm.xml"/> <!-- job mapping files --> <mapping resource="org/jbpm/job/Job.hbm.xml"/> <mapping resource="org/jbpm/job/Timer.hbm.xml"/> <mapping resource="org/jbpm/job/ExecuteNodeJob.hbm.xml"/> <mapping resource="org/jbpm/job/ExecuteActionJob.hbm.xml"/> <!-- taskmgmt.exe mapping files --> <mapping resource="org/jbpm/taskmgmt/exe/TaskMgmtInstance.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/exe/TaskInstance.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/exe/PooledActor.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/exe/SwimlaneInstance.hbm.xml"/> <!-- logging mapping files --> <mapping resource="org/jbpm/logging/log/ProcessLog.hbm.xml"/> <mapping resource="org/jbpm/logging/log/MessageLog.hbm.xml"/> <mapping resource="org/jbpm/logging/log/CompositeLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/ActionLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/NodeLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/ProcessInstanceCreateLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/ProcessInstanceEndLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/ProcessStateLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/SignalLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/TokenCreateLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/TokenEndLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/TransitionLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/VariableLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/VariableCreateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/VariableDeleteLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/VariableUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/ByteArrayUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/DateUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/DoubleUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/HibernateLongUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/HibernateStringUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/LongUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/StringUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/TaskLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/TaskCreateLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/TaskAssignLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/TaskEndLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/SwimlaneLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/SwimlaneCreateLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/SwimlaneAssignLog.hbm.xml"/> </session-factory> </hibernate-configuration> ---- edit --- I have already tried the hibernate.transaction.manager_lookup_class to set to the JBoss version (org.hibernate.transaction.JBossTransactionManagerLookup) it did not work...well it's not that suprising...I'll try now: org.hibernate.transaction.OC4JTransactionManagerLookup I tried with CMT instead of JTA, but it didn't work 
either.
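
    For reference, the JTA lookup wiring being tried would look like this in hibernate.cfg.xml. OC4JTransactionManagerLookup ships with later Hibernate 3.2.x releases (OrionTransactionManagerLookup is the older OC4J/Orion fallback), so check which class the Hibernate version bundled with the console actually contains:

        <property name="hibernate.transaction.factory_class">
            org.hibernate.transaction.JTATransactionFactory
        </property>
        <property name="hibernate.transaction.manager_lookup_class">
            org.hibernate.transaction.OC4JTransactionManagerLookup
        </property>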

    Read the article

  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
    The following is effectively a repost of the BizTalk 2004 naming guidelines that I have previously detailed. I have posted these again for completeness under BizTalk 2009, and to allow an element of separation in case I find some reason to amend them for BizTalk 2009. These guidelines should be universal across any version of BizTalk you may wish to apply them to.

    General Rules

    All names should use the Pascal casing convention.

    Project Namespaces

    For message schemas: [CompanyName].XML.Schemas.[FunctionalName]*
      Examples: ABC.XML.Schemas.Underwriting, DEF.XML.Schemas.MarshmellowTradingExchange
      * Denotes potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation

    For web services: [CompanyName].Web.Services.[FunctionalName]
      Example: ABC.Web.Services.OrderJellyBeans

    For the main BizTalk projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]*
      Examples: ABC.BizTalk.Mappings.Underwriting, ABC.BizTalk.Orchestrations.Underwriting
      * Denotes potential for multiple levels of functional name, such as Mappings.Underwriting.Valuations

    Assemblies

    BizTalk assembly names should match the associated project namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to both the formal assembly name and the DLL name. The solution name should take the name of the main project within the solution, and therefore also the namespace for that project. Although long names such as these can be unwieldy to work with, the benefit of having the full scope available when the assemblies are installed on the target server is generally judged to outweigh the inconvenience.

    Messaging Artifacts (Artifact: Standard / Notes / Examples)

    Schema: <DescriptiveName>.xsd
      Notes: The .NET type name should match, without the file extension. The .NET namespace will likely match the assembly name.
      Examples: PurchaseOrderAcknowledge_FF.xsd, FNMA100330_FF.xsd

    Property Schema: <DescriptiveName>.xsd
      Notes: Should be named to reflect possible common usage across multiple schemas.
      Examples: IspecMessagePropertySchema.xsd, UnderwritingOrchestrationKeys.xsd

    Map: <SourceSchema>2<DestinationSchema>.btm
      Notes: Exceptions may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps.
      Examples: InstructionResponse2CustomEmailRequest.btm (exception example), AccountCustomerAddressSummaryRequest2MainframeRequest.btm

    Orchestration: <DescriptiveName>.odx
      Examples: GetValuationReports.odx, SendMTEDecisionResponse.odx

    Send/Receive Pipeline: <DescriptiveName>.btp
      Examples: ValidatingXMLReceivePipeline.btp, FlatFileAssembler.btp

    Receive Port: A plainly worded phrase that clearly explains the function.
      Examples: FraudPreventionServices, LetterProcessing

    Receive Location: A plainly worded phrase that clearly explains the function.
      Notes: ? Do we want to include the transport type here ?
      Example: Arrears Web Service

    Send Port Group: A plainly worded phrase that clearly explains the function.
      Example: Customer Updates

    Send Port: A plainly worded phrase that clearly explains the function.
      Examples: ABCProductUpdater, LogLendingPolicyOutput

    Parties: A meaningful name for a trading partner. If dealing with multiple entities within a trading partner organization, the organization name could be used as a prefix.

    Roles: A meaningful name for the role that a trading partner plays.

    Orchestration Workflow Shapes (Shape: Standard / Notes / Examples)

    Scope: <DescriptionOfContainedWork> or <DescOfContainedWork><TxType>
      Notes: Including information about the transaction type may be appropriate where it adds significant documentation value to the diagram.
      Example: HandleReportResponse

    Receive: Receive<MessageName>
      Notes: Typically, MessageName will be the same as the name of the message variable being received "into".
      Example: ReceiveReportResponse

    Send: Send<MessageName>
      Notes: Typically, MessageName will be the same as the name of the message variable being sent.
      Example: SendValuationDetailsRequest

    Expression: <DescriptionOfEffect>
      Notes: Expression shapes should be named to describe the net effect of the expression, similar to naming a method. The exception is where the expression interacts with an external .NET component to perform a function that overlaps with existing BizTalk functionality – use the closest BizTalk shape name for that case.
      Example: CreatePrintXML

    Decide: <DescriptionOfDecision>
      Notes: A description of what will be decided in the "if" branch.
      Examples: Report Type?, Perform MF Save?

    If-Branch: <DescriptionOfDecision>
      Notes: A (potentially abbreviated) description of what is being decided.
      Example: Mortgage Valuation Yes

    Else-Branch: Else
      Notes: Else-branch shapes should always be named "Else".

    Construct Message (Assign): Create<Message> for the Construct shape; <ExpressionDescription> for the contained assignment.
      Notes: If a Construct shape contains a message assignment, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The contained message assignment shape should be named to describe the expression it contains.
      Example: CreateReportDataMV, which contains the expression ExtractReportData

    Construct Message (Transform): Create<Message> for the Construct shape; <SourceSchema>2<DestSchema> for the contained transform.
      Notes: If a Construct shape contains a message transform, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The contained message transform shape should generally be named the same as the called map.
      Example: CreateReportDataMV, which contains the transform ReportDataMV2ReportDataMV

    Construct Message (containing multiple shapes):
      Notes: If a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix.

    Call/Start Orchestration: Call<OrchestrationName> or Start<OrchestrationName>

    Throw: Throw<ExceptionType>
      Notes: The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased.
      Example: ThrowRuleException, which references the "ruleException" variable

    Parallel: <DescriptionOfParallelWork>
      Notes: Parallel shapes should be named by a description of what work will be done in parallel.

    Delay: <DescriptionOfWhatWaitingFor>
      Notes: Delay shapes should be named by a description of what is being waited for.
      Example: POAcknowledgeTimeout

    Listen: <DescriptionOfOutcomes>
      Notes: Listen shapes should be named by a description that captures (to the degree possible) all the branches of the Listen shape.
      Examples: POAckOrTimeout, FirstShippingBid

    Loop: <DescriptionOfLoop>
      Notes: A (potentially abbreviated) description of what the loop is.
      Examples: ForEachValuationReport, WhileErrorFlagTrue

    Role Link:
      Notes: See "Roles" in the messaging naming conventions above.

    Suspend: <ReasonDescription>
      Notes: Describe what action an administrator must take to resume the orchestration. More detail can be passed to the error property, and should include what the administrator should do before resuming the orchestration.
      Example: ReEstablishCreditLink

    Terminate: <ReasonDescription>
      Notes: Describe why the orchestration terminated. More detail can be passed to the error property.
      Example: TimeoutsExpired

    Call Rules: Call<PolicyName>
      Notes: The policy name may need to be abbreviated.
      Example: CallLendingPolicy

    Compensate: Compensate or Compensate<TxName>
      Notes: If the shape compensates nested transactions, the name should be suffixed with the name of the nested transaction – otherwise it should simply be Compensate.
      Example: CompensateTransferFunds

    Orchestration Types (Type: Standard / Notes / Examples)

    Multi-Part Message Type: <LogicalDocumentType>
      Notes: Multi-part types encapsulate multiple parts. The WSDL spec indicates "parts are a flexible mechanism for describing the logical abstract content of a message." The name of the multi-part type should correspond to the "logical" document type, i.e. what the sum of the parts describes.
      Example: InvoiceReceipt (which might encapsulate an invoice acknowledgement and a payment voucher)

    Multi-Part Message Part: <SchemaNameOfPart>
      Notes: Should (most often) be named simply for the schema (or simple type) associated with the part.
      Example: InvoiceHeader

    Message: <SchemaName> or <MultiPartMessageTypeName>
      Notes: Should be named based on the corresponding schema type or multi-part message type. If there is more than one variable of a type, name each for its use within the orchestration.
      Examples: ReportDataMV, UpdatedReportDataMV

    Variable: <DescriptiveName>
      Examples: TargetFilePath, StringProcessor

    Port Type: <FunctionDescription>PortType
      Notes: Should be named to suggest the nature of an endpoint, with Pascal casing and suffixed with "PortType". If there will be more than one port for a port type, the port type should be named according to the abstract service supplied. The WSDL spec indicates port types are "a named set of abstract operations and the abstract messages involved", which also encapsulates the message pattern (i.e. one-way, request-response, solicit-response) that all operations on the port type adhere to.
      Examples: ReceiveReportResponsePortType, CallEAEPortType (this is a two-way port, so Receive or Send alone would not be appropriate; it could have been ProcessEAERequestPortType, etc.)

    Port: <FunctionDescription>Port
      Notes: Should be named to suggest a grouping of functionality, with Pascal casing and suffixed with "Port".
      Examples: ReceiveReportResponsePort, CallEAEPort

    Correlation Type: <DescriptiveName>
      Notes: Should be named based on the logical name of what is being used to correlate.
      Example: PurchaseOrderNumber

    Correlation Set: <DescriptiveName>
      Notes: Should be named based on the corresponding correlation type. If there is more than one, it should be named to reflect its specific purpose within the orchestration.
      Example: PurchaseOrderNumber

    Orchestration Parameter: <DescriptiveName>
      Notes: Should be named to match the caller's names for the corresponding variables where appropriate.
    Read the article
