Search Results

Search found 14052 results on 563 pages for 'response filter'.


  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
    The following is effectively a repost of the BizTalk 2004 naming guidelines that I have previously detailed. I have posted these again for completeness under BizTalk 2009 and to allow an element of separation in case I find some reason to amend these for BizTalk 2009. These guidelines should be universal across any version of BizTalk you may wish to apply them to.

    General Rules
    All names should use Pascal casing.

    Project Namespaces
    For message schemas: [CompanyName].XML.Schemas.[FunctionalName]*
    Examples: ABC.XML.Schemas.Underwriting, DEF.XML.Schemas.MarshmellowTradingExchange
    * Denotes potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation
    For web services: [CompanyName].Web.Services.[FunctionalName]
    Example: ABC.Web.Services.OrderJellyBeans
    For the main BizTalk projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]*
    Examples: ABC.BizTalk.Mappings.Underwriting, ABC.BizTalk.Orchestrations.Underwriting
    * Denotes potential for multiple levels of functional name, such as Mappings.Underwriting.Valuations

    Assemblies
    BizTalk assembly names should match the associated project namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to the formal assembly name and the DLL name. The solution name should take the name of the main project within the solution, and therefore also the namespace for that project. Although long names such as this can be unwieldy to work with, the benefits of having the full scope available when the assemblies are installed on the target server are generally judged to outweigh this inconvenience.

    Messaging Artifacts
    - Schema: <DescriptiveName>.xsd. The .NET type name should match, without the file extension; the .NET namespace will likely match the assembly name. Examples: PurchaseOrderAcknowledge_FF.xsd or FNMA100330_FF.xsd
    - Property Schema: <DescriptiveName>.xsd. Should be named to reflect possible common usage across multiple schemas. Examples: IspecMessagePropertySchema.xsd, UnderwritingOrchestrationKeys.xsd
    - Map: <SourceSchema>2<DestinationSchema>.btm. Exceptions to this may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps. Examples: InstructionResponse2CustomEmailRequest.btm (exception example), AccountCustomerAddressSummaryRequest2MainframeRequest.btm
    - Orchestration: <DescriptiveName>.odx. Examples: GetValuationReports.odx, SendMTEDecisionResponse.odx
    - Send/Receive Pipeline: <DescriptiveName>.btp. Examples: ValidatingXMLReceivePipeline.btp, FlatFileAssembler.btp
    - Receive Port: a plainly worded phrase that will clearly explain the function. Examples: FraudPreventionServices, LetterProcessing
    - Receive Location: a plainly worded phrase that will clearly explain the function. (Do we want to include the transport type here?) Example: Arrears Web Service
    - Send Port Group: a plainly worded phrase that will clearly explain the function. Example: Customer Updates
    - Send Port: a plainly worded phrase that will clearly explain the function. Examples: ABCProductUpdater, LogLendingPolicyOutput
    - Parties: a meaningful name for a Trading Partner. If dealing with multiple entities within a Trading Partner organization, the organization name could be used as a prefix.
    - Roles: a meaningful name for the role that a Trading Partner plays.

    Orchestration Workflow Shapes
    - Scopes: <DescriptionOfContainedWork> or <DescOfContainedWork><TxType>. Including info about the transaction type may be appropriate in some situations where it adds significant documentation value to the diagram. Example: HandleReportResponse
    - Receive: Receive<MessageName>. Typically, MessageName will be the same as the name of the message variable that is being received "into". Example: ReceiveReportResponse
    - Send: Send<MessageName>. Typically, MessageName will be the same as the name of the message variable that is being sent. Example: SendValuationDetailsRequest
    - Expression: <DescriptionOfEffect>. Expression shapes should be named to describe the net effect of the expression, similar to naming a method. The exception to this is the case where the expression is interacting with an external .NET component to perform a function that overlaps with existing BizTalk functionality – use the closest BizTalk shape for this case. Example: CreatePrintXML
    - Decide: <DescriptionOfDecision>. A description of what will be decided in the "if" branch. Examples: Report Type? Perform MF Save?
    - If-Branch: <DescriptionOfDecision>. A (potentially abbreviated) description of what is being decided. Example: Mortgage Valuation Yes
    - Else-Branch: Else. Else-branch shapes should always be named "Else".
    - Construct Message (Assign): Create<Message> (for the Construct shape); <ExpressionDescription> (for the expression). If a Construct shape contains a message assignment, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The actual message assignment shape contained should be named to describe the expression that is contained. Example: CreateReportDataMV, which contains expression ExtractReportData
    - Construct Message (Transform): Create<Message> (for the Construct shape); <SourceSchema>2<DestSchema> (for the transform). If a Construct shape contains a message transform, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The actual message transform shape contained should generally be named the same as the called map. Example: CreateReportDataMV, which contains transform ReportDataMV2ReportDataMV
    - Construct Message (containing multiple shapes): if a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix.
    - Call/Start Orchestration: Call<OrchestrationName> or Start<OrchestrationName>
    - Throw: Throw<ExceptionType>. The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased. Example: ThrowRuleException, which references the "ruleException" variable.
    - Parallel: <DescriptionOfParallelWork>. Parallel shapes should be named by a description of what work will be done in parallel.
    - Delay: <DescriptionOfWhatWaitingFor>. Delay shapes should be named by a description of what is being waited for. Example: POAcknowledgeTimeout
    - Listen: <DescriptionOfOutcomes>. Listen shapes should be named by a description that captures (to the degree possible) all the branches of the Listen shape. Examples: POAckOrTimeout, FirstShippingBid
    - Loop: <DescriptionOfLoop>. A (potentially abbreviated) description of what the loop is. Examples: ForEachValuationReport, WhileErrorFlagTrue
    - Role Link: see "Roles" in the messaging naming conventions above.
    - Suspend: <ReasonDescription>. Describe what action an administrator must take to resume the orchestration. More detail can be passed to the error property – and should include what should be done by the administrator before resuming the orchestration. Example: ReEstablishCreditLink
    - Terminate: <ReasonDescription>. Describe why the orchestration terminated. More detail can be passed to the error property. Example: TimeoutsExpired
    - Call Rules: Call<PolicyName>. The policy name may need to be abbreviated. Example: CallLendingPolicy
    - Compensate: Compensate or Compensate<TxName>. If the shape compensates nested transactions, names should be suffixed with the name of the nested transaction – otherwise it should simply be Compensate. Example: CompensateTransferFunds

    Orchestration Types
    - Multi-Part Message Types: <LogicalDocumentType>. Multi-part types encapsulate multiple parts. The WSDL spec indicates "parts are a flexible mechanism for describing the logical abstract content of a message." The name of the multi-part type should correspond to the "logical" document type, i.e. what the sum of the parts describes. Example: InvoiceReceipt (which might encapsulate an invoice acknowledgement and a payment voucher)
    - Multi-Part Message Part: <SchemaNameOfPart>. Should be named (most often) simply for the schema (or simple type) associated with the part. Example: InvoiceHeader
    - Messages: <SchemaName> or <MultiPartMessageTypeName>. Should be named based on the corresponding schema type or multi-part message type. If there is more than one variable of a type, name it for its use within the orchestration. Examples: ReportDataMV, UpdatedReportDataMV
    - Variables: <DescriptiveName>. Examples: TargetFilePath, StringProcessor
    - Port Types: <FunctionDescription>PortType. Should be named to suggest the nature of an endpoint, with Pascal casing and suffixed with "PortType". If there will be more than one port for a port type, the port type should be named according to the abstract service supplied. The WSDL spec indicates port types are "a named set of abstract operations and the abstract messages involved" that also encapsulates the message pattern (i.e. one-way, request-response, solicit-response) that all operations on the port type adhere to. Examples: ReceiveReportResponsePortType or CallEAEPortType (this is a two-way port, so Receive or Send alone would not be appropriate; it could have been ProcessEAERequestPortType etc.)
    - Ports: <FunctionDescription>Port. Should be named to suggest a grouping of functionality, with Pascal casing and suffixed with "Port". Examples: ReceiveReportResponsePort, CallEAEPort
    - Correlation Types: <DescriptiveName>. Should be named based on the logical name of what is being used to correlate. Example: PurchaseOrderNumber
    - Correlation Sets: <DescriptiveName>. Should be named based on the corresponding correlation type. If there is more than one, it should be named to reflect its specific purpose within the orchestration. Example: PurchaseOrderNumber
    - Orchestration Parameters: <DescriptiveName>. Should be named to match the caller's names for the corresponding variables where appropriate.

    Read the article

  • Crowdsourcing MVVM Light Toolkit support

    - by Laurent Bugnion
    Considering the number of emails that are sent to me asking for support for MVVM Light toolkit, I find myself unable to answer all of them in sufficient time to make me feel good. In consequence, I started to send the following message in response to support queries, either per email or on the MVVM Light Codeplex discussion page. Hi, I am doing my best to answer all the questions as fast as possible. I receive a lot of them, however, and cannot reply to everyone fast enough to make me happy. Due to this, I would like to encourage you to post your question on StackOverflow, and tag it with the tag mvvm-light. StackOverflow is an awesome site where tons of developers help others with their technical question. http://stackoverflow.com/questions/tagged/mvvm-light I will monitor this tag on the StackOverflow website and do my best to answer questions. The advantage of StackOverflow over the Codeplex discussion is the sheer number of qualified developers able to help you with your questions, the visibility of the question itself, and the whole StackOverflow infrastructure (reputation, up- or down-vote, comments, etc) Thanks! Laurent Bug reports Regarding bug reports, feel free to continue to send them to the Codeplex site (preferred), or to me directly. I hope that this will help all support queries to be answered faster, and with the great quality for which the StackOverflow users are known!   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Re-running SSRS subscription jobs that have failed

    - by Rob Farley
    Sometimes, an SSRS subscription fails for some reason. It can be annoying, particularly as the appropriate response can be hard to see immediately. There may be a long list of jobs that failed one morning if a Mail Server is down, and trying to work out a way of running each one again can be painful. It’s almost an argument for using shared schedules a lot, but the problem with this is that there are bound to be other things on that shared schedule that you wouldn’t want to be re-run. Luckily, there’s a table in the ReportServer database called dbo.Subscriptions, which is where the LastStatus of the Subscription is stored. Having found the subscriptions that you’re interested in, finding the SQL Agent Jobs that correspond to them can be frustrating. Luckily, the job step command contains the subscriptionid, so it’s possible to look them up based on that. And of course, once the jobs have been found, they can be executed easily enough. In this example, I produce a list of the commands to run the jobs. I can copy the results out and execute them.

        select 'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + ''''
        from msdb.dbo.sysjobs j
        join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
        join [ReportServer].[dbo].[Subscriptions] s
          on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
        where s.LastStatus like 'Failure sending mail%';

    Another option could be to return the job step commands directly (js.command in this query), but my preference is to run the job that contains the step.

    Read the article

  • Methodology behind fetching large XML data sets in pieces

    - by Jerry Dodge
    I am working on an HTTP Server in Delphi which simply sends back a custom XML dataset. I am not following any type of standard formatting, such as SOAP. I have the system working seamlessly, except one small flaw: When I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP Server I'm building is essentially an XML Data based API around a database, implementing the common business rule - therefore, the requests are specific to the data behind the system. When, for example, I fetch a large set of product data, I would like to break this down and send it back piece by piece. However, a single HTTP request calls for a single response. I can't necessarily keep feeding the client with multiple different XML packets unless the client explicitly requests it. I don't have any session management, but rather an API Key. I know if I had sessions, I could keep-alive a dataset temporarily for a client, and they could request bits and pieces of it. However, without session management, I would have to execute the SQL query multiple times (for each chunk of data), and in the mean-time, if that data changes, the "pages" might get messed up, therefore causing items to show on the wrong pages, after navigating to a different page. So how is this commonly handled? What's the methodology behind breaking down a large XML dataset into chunks to save the load?
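
    One common answer is cursor (keyset) paging: the client passes back the last key it received, and each response carries the next cursor, so pages stay stable even if rows are inserted or deleted between requests and no server-side session or open result set is needed. The sketch below is illustrative only – Python with SQLite and a hypothetical products table rather than Delphi – but the "WHERE id > last_id ORDER BY id LIMIT n" pattern it shows ports to any backend:

        import sqlite3

        PAGE_SIZE = 100

        def fetch_page(conn, after_id=0, page_size=PAGE_SIZE):
            # Keyset pagination: filter on the last seen key instead of using OFFSET,
            # so concurrent inserts/deletes cannot shift rows between pages.
            rows = conn.execute(
                "SELECT id, name, price FROM products "
                "WHERE id > ? ORDER BY id LIMIT ?",
                (after_id, page_size),
            ).fetchall()
            next_cursor = rows[-1][0] if len(rows) == page_size else None
            return rows, next_cursor

        # The HTTP handler would serialize 'rows' to XML and echo 'next_cursor' back;
        # the client keeps requesting /products?after=<next_cursor> until it is empty.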

    Read the article

  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display speed, and one of the methods is to gzip content from the webserver. Google recommends: Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger. We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: The minimum size is 860 bytes. My reply: What is the reason (or reasons) why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for facebook? (see below) Google recommends gzipping more aggressively. And that seems appropriate on our site where the most frequent hits, by far, are AJAX calls that are <860 bytes. Akamai's response: The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them. So I'm here for some fact checking. Is packet size really the end of the reasoning behind the 860 byte limit? Why would high traffic sites push this down to the 150 byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
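
    The size effect itself is easy to verify. A rough sketch (Python; the payloads are made up and exact byte counts will vary with content and compression level) showing that a tiny response can grow under gzip while a larger, repetitive one shrinks:

        import gzip

        small = b'{"ok":true,"count":3}'                            # ~21 bytes, a typical tiny AJAX reply
        large = b'<product><name>widget</name></product>' * 30      # ~1.1 KB of repetitive XML

        for label, payload in (("small", small), ("large", large)):
            compressed = gzip.compress(payload)
            print(label, len(payload), "->", len(compressed))

        # The small payload grows (gzip adds roughly 18-20 bytes of header and trailer
        # before any savings), while the repetitive ~1 KB payload shrinks dramatically.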

    Read the article

  • Installing a VADTools design component into your 3CX Voice Application Designer toolbox

    - by ParadigmShift
    The 3CX Voice Application Designer is an innovative tool for creating IVR (Interactive Voice Response) Applications, or Voice Applications.  It is a familiar drag-and-drop experience that Visual Studio developers will get the hang of pretty quick. Additionally, there are new 3rd party components released by BlueVoice, that are distributed though www.UtahVoIPStore.com I thought I’d post a quick introduction to it, by showing how to install a component into you designer tool box.  In this example I am using the CommandLine component, which lets you call the command line from your voice application. First, copy the ZIP file that came with your component to the root folder of your VAD project. Now extract the zip file into the root directory. The component will be in the root directory and the Libraries directory will have a new DLL file. Open your VAD project and right-click on the project in project explorer to add the new component to your project. Navigate to the root folder of your project and select the new component. The component is now ready for you to use in your toolbox.

    Read the article

  • Silverlight Cream for May 30, 2010 -- #873

    - by Dave Campbell
    In this Issue: Matthias Shapiro, Colin Blair(-2-), Mike Snow, Marlon Grech, Victor Gaudioso. Shoutout: If you're going to be anywhere near Mission Viejo, California on June 19th, set your calendar for this Victor Gaudioso event: New Speaking Event: Microsoft Book Signing/Silverlight 4 Presentation SilverLaw has another example of his Flexible surface app up: Drag & Drop Flexible Surface - Silverlight 4 From SilverlightCream.com: Silverlight 4 Binding and StringFormat in XAML Matthias Shapiro has a discussion posted about StringFormat binding in Silverlight 4 ... he dug in hard on this... well worth a read. View Model Collection Properties for WCF RIA Services Colin Blair is discussing some possibilities for exposing collections of entities from the ViewModel... his favorite: PagedCollectionView. The next post discusses this deeper. Advanced Paged Collection View Colin Blair continues in more depth on the PagedCollectionView, this time handling paging, sorting, and multiple loads. Silverlight Tip of the day #25 – Detecting Validation Errors on Submit Mike Snow's latest Tip of the Day is up and is about validation - specifically validating after your user has pressed "OK" INotifyPropertyChanged… I am fed up of handling events just to know when a property changed Marlon Grech has an Rx-less solution to code notifications of properties changing... this is a WPF and Silverlight solution and all the code is downloadable. New Silverlight Video Tutorial: How to Add Multiple BitmapEffects to One Object Victor Gaudioso's latest outing is in response to a query from a reader and is a video tutorial showing how to add multiple bitmap effects to one object. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Big Data – Learning Basics of Big Data in 21 Days – Bookmark

    - by Pinal Dave
    Earlier this month I had a great time to write Bascis of Big Data series. This series received great response and lots of good comments I have received, I am going to follow up this basics series with further in-depth series in near future. Here is the consolidated blog post where you can find all the 21 days blog posts together. Bookmark this page for future reference. Big Data – Beginning Big Data – Day 1 of 21 Big Data – What is Big Data – 3 Vs of Big Data – Volume, Velocity and Variety – Day 2 of 21 Big Data – Evolution of Big Data – Day 3 of 21 Big Data – Basics of Big Data Architecture – Day 4 of 21 Big Data – Buzz Words: What is NoSQL – Day 5 of 21 Big Data – Buzz Words: What is Hadoop – Day 6 of 21 Big Data – Buzz Words: What is MapReduce – Day 7 of 21 Big Data – Buzz Words: What is HDFS – Day 8 of 21 Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21 Big Data – Buzz Words: What is NewSQL – Day 10 of 21 Big Data – Role of Cloud Computing in Big Data – Day 11 of 21 Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21 Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21 Big Data – Operational Databases Supporting Big Data – Columnar, Graph and Spatial Database – Day 14 of 21 Big Data – Data Mining with Hive – What is Hive? – What is HiveQL (HQL)? – Day 15 of 21 Big Data – Interacting with Hadoop – What is PIG? – What is PIG Latin? – Day 16 of 21 Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21 Big Data – Basics of Big Data Analytics – Day 18 of 21 Big Data – How to become a Data Scientist and Learn Data Science? – Day 19 of 21 Big Data – Various Learning Resources – How to Start with Big Data? – Day 20 of 21 Big Data – Final Wrap and What Next – Day 21 of 21 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Caching in the .NET Stack: Inside-Out

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/06/28/caching-in-the-.net-stack-inside-out.aspx

    I'm delighted to have my first course published on Pluralsight - Caching in the .NET Stack: Inside-out.   It's a pretty comprehensive look at caching in .NET solutions. The first half covers using local, remote and persistent cache stores inside the solution, including the .NET MemoryCache, NCache Express, AppFabric Caching, memcached, Azure Table Storage and local disk stores. The second half covers caching outside the solution in HTTP clients and proxies, and how to set up ASP.NET WebForms, MVC, Web API and WCF projects to use HTTP validation and expiration caching.   The course takes a hands-on approach, starting with a distributed solution that has no caching, analysing key points which can benefit from caching, and adding different types of cache. At the end of the course I run through a set of before and after performance tests, stressing the solution under load. Without caching and with 60 concurrent users the page response time maxes out at 18 seconds - with caching that falls to 2 seconds, so it's a huge improvement from very little effort. I’d be glad to hear feedback if you watch the course, especially if it’s as positive as my editor’s.

    Read the article

  • SyncToBlog #10 Lots of Azure and Cloud Links including MIX10 videos

    - by Eric Nelson
    Just getting a few interesting cloud links “down on paper”. I last did one of these on Azure in Feb 20010. Cloud Links: Article on Debugging in the Cloud http://code.msdn.microsoft.com/azurescale  A sample app that demonstrates monitoring and automatically scaling an Azure application in response to dropping performance etc. Basically a console app that checks perf stats and then uses the Service Management API to spin up new instances when needed. Azure In Action book is imminent :) Running Memcached in Windows Azure from the MS UK team Using Microsoft Codename Dallas as a data source for Drupal also from the MS UK team I often mention them – but this post is the biz! Metodi on fault and upgrade domains Detailed blog post on comparing Azure AppFabric Service Bus REST support to the free Faye Ruby+JavaScript gem that implements the JSON publish/subscribe protocol Bayeux. AppFabric LABS allow you to test out and play with experimental AppFabric technologies. Details of the upcoming VM support in Windows Azure Nice series of posts from J D Meier in the Patterns and Practice team How To Use ASP.NET Forms Auth with Azure Tables  How To Use ASP.NET Forms Auth with Roles in Azure Tables How To Use ASP.NET Forms Auth with SQL Server on Windows Azure And sessions from MIX10 held March 15th to 17th: Lap around the Windows Azure Platform – Steve Marx Building and Deploying Windows Azure Based Applications with Microsoft Visual Studio 2010 – Jim Nakashima Building PHP Applications using the Windows Azure Platform – Craig Kitterman, Sumit Chawla Using Ruby on Rails to Build Windows Azure Applications – Sriram Krishnan Microsoft Project Code Name “Dallas": Data for your apps – Moe Khosravy Using Storage in the Windows Azure Platform – Chris Auld Building Web Applications with Windows Azure Storage – Brad Calder Building Web Application with Microsoft SQL Azure – David Robinson Connecting Your Applications in the Cloud with Windows Azure AppFabric – Clemens Vasters Microsoft Silverlight and Windows Azure: A Match Made for the Web – Matt Kerner Something for everyone :)

    Read the article

  • General directions on developing a server side control system for JS/Canvas Action RPG

    - by Billy Ninja
    Well, yesterday I asked on anti-cheat JS, and confirmed what I kind of already knew that it's just not possible. Now I wanna measure roughly how hard it is to implement a server side checking that is agnostic to client input, that does not mess with the game experience so much. I don't wanna waste to much resource on this matter, since it's going to be initially a single player game, that I may or would like to introduce some kind of ranking, trading system later on. I'd rather deliver better more cool game features instead. I don't wanna have to guarantee super fast server response to keep the game going lag free. I'd rather go with more loose discrete control of key variables and instances. Like store user's action on a fifo buffer on the client, and push that actions to the server gradually. I'd love to see a elegant, generic solution that I could plug into my client game logic root (not having to scatter treatments everywhere in my client js) - and have few classes on Node.js server that could handle that - without having to mirror/describe all of my game entities a second time on the server.
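
    One loose, batch-oriented shape for this – sketched below in Python purely for brevity, since the eventual server here would be Node.js, and every name and limit in it is a hypothetical game rule rather than anything from the question – is to let the client queue actions in its FIFO buffer and periodically POST the batch, while the server only re-checks a few cheap invariants per action instead of re-simulating the whole game:

        MAX_ACTIONS_PER_SECOND = 10   # assumed rate limit
        MAX_MOVE_PER_ACTION = 5.0     # assumed maximum distance per step

        def validate_batch(player, actions):
            """Accept or reject a FIFO batch of client actions.

            'player' holds the last trusted server-side state; 'actions' is the
            list the client buffered, oldest first.
            """
            accepted = []
            last_ts = player["last_action_ts"]
            for a in actions:
                # Cheap sanity checks only: actions must move forward in time
                # and respect the allowed action rate.
                if a["ts"] - last_ts < 1.0 / MAX_ACTIONS_PER_SECOND:
                    continue
                # Movement must stay within the per-action limit.
                if a["type"] == "move" and abs(a["dx"]) + abs(a["dy"]) > MAX_MOVE_PER_ACTION:
                    continue
                accepted.append(a)
                last_ts = a["ts"]
            player["last_action_ts"] = last_ts
            return accepted

    The same structure ports directly to a single Node.js handler, so the client-side game logic only needs one hook: push each action into the buffer and flush it on a timer.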

    Read the article

  • Are there any font rendering libraries for games development that support hinting?

    - by Richard Fabian
    I've used angel code's bitmap font generator quite a bit and though it's very good, I wondered if there would be a way of using the hinting information to provide a better readable result by using hinting to provide differing thickness based on size/pixel coverage. I imagine any solution would have to use the distance field tech presented in the valve paper on smoothing fonts while maintaining or reducing asset size. (http://www.gamedev.net/community/forums/topic.asp?topic_id=494612) but I haven't found any demos of it being used with hinting information turned on or included in the field gradients in any way. Another way of looking at this is whether there are any font bitmap generators that will output mipmaps that still maintain their readability in the face of pixel size. I think the lower mip levels would try to guarantee fill and space where it is necessary to maintain readability/topology over maintaining style/form (the point of hinting). In response to "Is there a reason you can't just render the size you want", the problem lies in the fact that font rasterisers currently don't render in 3D, and hinting information would be important in different amounts due to the pixel density being different along different axes, even differing in importance along the length of a string due to the size reducing over distance. For example, I only want horizontal hinting in a texture that is viewed from the side, and only really want vertical hinting in a font that is viewed from below or above. This isn't meant to be a renderer that tries to render a perfect outline as accurately as possible, as hinting distorts the reality of the font, instead this is meant to be a rendering solution for quite static scenes, but scenes that have 3D transformed and warped text layout. In this case the legibility is important, more important than the accuracy of representation of the polygon shape.

    Read the article

  • links for 2010-04-12

    - by Bob Rhubart
    Andy Mulholland: We need innovation! What does that mean? "The most common response would seem to be ‘I will know it when I see it’, which suggests business success is based on ‘getting lucky’. As you might expect business schools don’t agree with this and as A G Lafley, author of several works on the topic comments: 'Innovation is risky, but it’s not random. Innovators have a disciplined invention process.'" Capgemini CTO blogger Andy Mulholland. (tags: entarch enterprisearchitecture innovation) @eelzinga: lEAI/Oracle Service Bus testing with Citrus Framework, part2 IT-Eye's Eric Elzinga continues his series with a test of a scenario that is part of a customer's middleware architecture. (tags: oracle otn ESB soa citrus) @fteter: Collaborate 10 - What Looks Good To Me Oracle ACE Director Floyd Teter from NASA's JPL shares quick previews of his Collaborate 10 presentations, along with a list of some sessions he plans to attend. (tags: oracle otn oracleace collaborate2010) Mark Rittman: OWB11gR2 for Windows Now Available Oracle ACE Director Mark Rittman of Rittman Mead shares insight on the recent Oracle Warehouse Builder release, along with a list of articles on the new features in Oracle Database 11gR2. (tags: oracle otn datewarehousing businessintelligence 11gr2)

    Read the article

  • Anonymous Access and Sharepoint Web Services

    - by Stacy Vicknair
    A month or so ago I was working on a feature for a project that required a level of anonymity on the Sharepoint site in order to function. At the same time I was also working on another feature that required access to the Sharepoint search.asmx web service. I found out, the hard way, that the Sharepoint Web Services do not operate in an expected way while the IIS site is under anonymous access. Even though these web services expect requests with certain permissions (in theory) they never attempt to request those credentials when the web service is contacted. As a result the services return a 401 Unauthorized response. The fix for my situation was to restrict anonymous access to the area that needed it (in this case the control in question had support for being used in an ASP.NET app that I could throw in a virtual directory). After that I removed anonymous access from IIS for the site itself and the QueryService requests were working once more. Here’s a related article with a bit more depth about a similar experience: http://chrisdomino.com/Blog/Post/401-Reasons-Why-SharePoint-Web-Services-Don-t-Work-Anonymously?Length=4 Technorati Tags: Sharepoint,QueryService,WSS,IIS,Anonymous Access

    Read the article

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know a weblogic domain consists of set of servers running independently or in a cluster mode, sharing the distributed resources. And in most environments weblogic  cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability.  These servers can run on the same machine, or be located on different machines.  It's a common task to increase a cluster's capacity by adding new machines to the cluster to host the new server instances.  You can do it by manually installing weblogic binaries to the new host and use pack/unpack commands to add a managed server to this new host.  But with Enterprise Manager Grid Control 11gR1 (EMGC) there is  another way - Fusion Middleware Domain Scale Up  procedure. I'm going to show you how it works.Here is a picture of  my medrec_oradb weblogic domain, what is registered in EMGC. It contains an admin server and a cluster MedRecCluster with  the single managed server MS1. Both admin and managed servers are on the same host oel46-vmware, it's a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure.  And here are the application deployments, note that couple of applications are deployed to the cluster.First of all I have to prepare a new machine that will host new managed sever of my cluster. I created new VM with OEL 5.4 using the corresponding Oracle VM template available in Oracle E-Delivery site for Oracle Linux and Oracle VM and named it wls1032. Next step is to install Oracle EM Grid Control 11gR1 Agent to this new host.  You can download it from the OTN page and install it manually,  or you can use Agent Installation Deployment procedure available in EMGC  (Deployments->Agent Installation->Install Agent). Anyway, when you agent is up and running on the new machine, you will see it in EMGC Console in the Targets->Hosts subtab.Now we are ready to scale up our weblogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select a Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process.Select the domain from list, enter the working directory on the admin server host, and also fill the weblogic credentials for the administration server console and the OS credentials for the  admin server host.  Click Next button.  The next step allows you to configure you domain, to add a new manager server to the cluster you should select the cluster in the tree and click Add Server button. Select the newly added server in a tree, choose the target host and  enter the configuration details of your managed server. You can also add new machine and node manager details.  Please note that you cannot change the values in  Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as for the admin server host.   Working directory on the target host should have enough free space to store FMW home binaries and domain configuration files.  In my experience the working directories should have at least 3 Gb of free space.  The last thing you should fill is the OS credentials for the target host. The next steps allows you to schedule the execution of the procedure, it is started immediately in my example. The last step is just a review the configuration for the domain scale up. Click Submit to launch the process. 
You can track the status of the procedure execution by selecting Deployments->Deployment Procedures->Procedure Completion Status in the EMGC Console.As you can see in the picture below, the procedure consists of the many steps, and I'm going to share my experience about the issues that I had at some of the steps. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking Retry button.Check OUI Prerequisites  step may fail if the target host does  not pass prerequisites checks for Weblogic Server installation such as amount of RAM, linux packages installed, etc. Create FMW Clone Archive step may fail if you do not have enough free space in the working directory on the administration server host.Transfer cloning archive to targets  step  may fail if the EMGC agents on the admin server host or on target host are not secured.   You should secure the agent by issuing ./emctl secure agent  command from $AGENT_HOME/bin directory and entering the agent registration password.Both Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host. The most complicated issue I had on the Run Inventory Collection  step. The step failed and I noticed that the agent on the target server is also failed with the following error in the $AGENT_HOME/sysman/log/emagent.trc  log file:2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/uploadFATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 32010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminatingI checked the timezone of my domain target inside EMGC repositoryselect timezone_regionfrom mgmt_targets where target_type = 'weblogic_domain'  and display_name = 'medrec_oradb'"TIMEZONE_REGION""America/Los_Angeles"Then checked the timezone of my agents and indeed, they differedselect target_name, timezone_region from mgmt_targets where type_display_name = 'Agent'"TARGET_NAME"    "TIMEZONE_REGION""oel46-vmware:3872"    "America/Los_Angeles""wls1032.imc.fors.ru:3872"    "America/New_York"So I had to change the timezone on the wls1032 host and propagate this changes to the agent and to the EMGC repository. 
Here are the steps:
- Issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles"
- Propagated the change to the agent by executing the ./emctl resetTZ agent command from the $AGENT_HOME/bin directory
- Connected to the EMGC repository as sysman and executed the following PL/SQL block:
   begin
      mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872','America/Los_Angeles');
      commit;
   end;
After that I had to clear the pending uploads on wls1032.imc.fors.ru:
  rm -r $AGENT_HOME/sysman/emd/state/*
  rm -r $AGENT_HOME/sysman/emd/collection/*
  rm -r $AGENT_HOME/sysman/emd/upload/*
  rm $AGENT_HOME/sysman/emd/lastupld.xml
  rm $AGENT_HOME/sysman/emd/agntstmp.txt
  $AGENT_HOME/bin/emctl start agent
  $AGENT_HOME/bin/emctl clearstate agent
The last part of this solution was to resync the agent in the EMGC console by clicking the Agent Resynchronization button (please leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked in the next screen). After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I caught another one:
EMD upload error: Failed to upload file A0000004.xml: HTTP error.Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880]
So the uploaded XML file size was 7 MB, and the limit on the OMS was 5 MB. To increase the max file size limit to 20 MB I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:
  ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
  ./emctl stop oms
  ./emctl start oms
After that I issued the ./emctl upload command from $AGENT_HOME/bin on wls1032 one more time and it completed successfully. The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my weblogic domain scale-up in the EMGC Console. So, now the weblogic cluster contains 2 managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is a part of the WebLogic Server Management Pack Enterprise Edition.

    Read the article

  • Rackspace Cloud Servers in Europe?

    - by mit
    We have set up a cloud virtual server at Rackspace in the US, but we use it from Europe. I have found that I am not quite happy with the response time. Of course I knew that there would be some latency, but I am not sure whether it is the overseas latency (ping is 120ms) or also the minimal resources. It is the smallest machine, 256 MB RAM, 10 GB disk, running MediaWiki on Ubuntu 10.04 64-bit. The instance lives in the Rackspace ORD1 datacenter. As soon as they have opened their new facilities in the UK we plan to move the instance there, but we are already planning more machines. The pricing is quite attractive. I don't really want to do a lot of measuring and benchmarking, so I am just asking for your opinions, and it would be nice to hear what you can tell from your experience – maybe from someone who uses such small instances in the US. And what can we really expect if we upgrade to more resources?

    Read the article

  • Storage Forum at Oracle OpenWorld

    - by kgee
    For anyone attending Oracle OpenWorld and involved in Storage, join us at the Storage Forum & Reception. This special engagement offers you the ability to meet Oracle’s top storage executives, architects and fellow storage colleagues. Features include interactive sessions and round-table discussions on Oracle's storage strategy, product direction, and real-world customer implementations. It’s your chance to ask questions and learn first-hand about Oracle's response to top trends and what keeps storage managers up at night, including how to contain storage costs, improve performance, and ensure seamless integration with Oracle software environments. Featured Speakers: Mike Workman, SVP of Pillar Axiom Storage Group; Phil Bullinger, SVP of Sun ZFS Storage Group; and Jim Cates, VP of Tape Systems Storage Group Added Bonus: The Storage Forum will be followed by an exclusive Wine and Cocktail Reception where you can... Meet and network with peers, and other storage professionals Interact with Oracle’s experts in a fun and relaxed setting Wind down and prepare for the Oracle Customer Appreciation Event featuring Pearl Jam and Kings of Leon Date & Times:Wednesday, October 3, 20123:30 – 5:00 p.m. Forum 5:00 – 7:00 p.m. Reception Disclaimer: Space is limited, so register at http://bit.ly/PULcyR as soon as possible! If you want any more information, feel free to email [email protected]

    Read the article

  • How do I make changes to /proc/acpi/wakeup permanent?

    - by Jolan
    I had a problem with my Ubuntu 12.04 waking up immediately after going into suspend. I solved the problem by changing the settings in /proc/acpi/wakeup, as suggested in this question: How do I prevent immediate wake up from suspend?. After changing the settings, the system goes flawlessly into suspend and stays suspended, but after I wake it back up, the settings in /proc/acpi/wakeup are different from what I set them to. Before going to suspend: cat /proc/acpi/wakeup Device S-state Status Sysfs node SMB0 S4 *disabled pci:0000:00:03.2 PBB0 S4 *disabled pci:0000:00:09.0 HDAC S4 *disabled pci:0000:00:08.0 XVR0 S4 *disabled pci:0000:00:0c.0 XVR1 S4 *disabled P0P5 S4 *disabled P0P6 S4 *disabled pci:0000:00:15.0 GLAN S4 *enabled pci:0000:03:00.0 P0P7 S4 *disabled pci:0000:00:16.0 P0P8 S4 *disabled P0P9 S4 *disabled USB0 S3 *disabled pci:0000:00:04.0 USB2 S3 *disabled pci:0000:00:04.1 US15 S3 *disabled pci:0000:00:06.0 US12 S3 *disabled pci:0000:00:06.1 PWRB S4 *enabled SLPB S4 *enabled I tell the system to suspend, and it works as it should. But later after waking it up, the settings are changed to either: USB0 S3 *disabled pci:0000:00:04.0 USB2 S3 *enabled pci:0000:00:04.1 US15 S3 *disabled pci:0000:00:06.0 US12 S3 *enabled pci:0000:00:06.1 or USB0 S3 *enabled pci:0000:00:04.0 USB2 S3 *enabled pci:0000:00:04.1 US15 S3 *enabled pci:0000:00:06.0 US12 S3 *enabled pci:0000:00:06.1 Any ideas? Thank you for your response. Unfortunately it did not solve my problem. all of /sys/bus/usb/devices/usb1/power/wakeup /sys/bus/usb/devices/usb2/power/wakeup /sys/bus/usb/devices/usb3/power/wakeup /sys/bus/usb/devices/usb4/power/wakeup as well as /sys/bus/usb/devices/3-1/power/wakeup are set to disabled, and the notebook still wakes up by itself right after going to sleep. The only thing it seems to react to are the settings in /proc/acpi/wakeup, which keep changing (resetting) every time i power off/restart my notebook.
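
    Note that /proc/acpi/wakeup is not persistent: the kernel rebuilds it on every boot, and writing a device name to the file toggles that device's state. One way to make the change stick – a sketch under assumptions: the device names are the ones from this question, the script must run as root, and it would be invoked at each boot, for example from /etc/rc.local – is to re-disable the offending entries on startup:

        #!/usr/bin/env python3
        # Re-disable ACPI wakeup devices at boot. Writing a device name to
        # /proc/acpi/wakeup toggles it, so only toggle entries that are enabled.

        WANT_DISABLED = ["USB0", "USB2", "US15", "US12"]  # devices from this question

        with open("/proc/acpi/wakeup") as f:
            lines = f.read().splitlines()[1:]  # skip the header row

        for line in lines:
            fields = line.split()
            if len(fields) < 3:
                continue
            device, status = fields[0], fields[2]
            if device in WANT_DISABLED and status == "*enabled":
                with open("/proc/acpi/wakeup", "w") as f:
                    f.write(device + "\n")  # toggles the device to *disabled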

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle tested on this and failed to achieve my goal which is to deny all access to all directories except the Public directory and only allow access to all all other directories with specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled: sudo a2enmod ssl sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod rewrite sudo a2ensite default-ssl Outside of the script I copied the sites-available to sites-enabled then reloaded Apache. I have a directory created for Railo cmfl located at /var/www/Railo/ Navigating the browser to http ://Server_IP_Address/Railo forces ssl and relocates to https ://Server_IP_Address/Railo which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appears to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this: Railo error Public NotPublic Sandbox These are my sites-enabled configurations: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #Default Deny All to prevent walking backwards in file system Alias /Railo/ "/var/www/Railo/" <Directory ~ ".*/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R] </VirtualHost> and <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www Alias /Railo/ "/var/www/Railo/" <Directory ~ "/var/www/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
Using default /etc/apache2/logs/jk-runtime-status
I've checked my regular expression against the paths of the directories with RegexBuddy just to be sure it was correct. The problem doesn't appear to be regex related, although I may have something incorrect in the Directory directive. The Location directive does seem to be working correctly for blocking access to the Railo admin site.

    Read the article

  • VS2012 - How to manually convert .NET Class Library to a Portable Class Library

    - by Igor Milovanovic
    The portable libraries are the  response to the growing profile fragmentation in .NET frameworks. With help of portable libraries you can share code between different runtimes without dreadful #ifdef PLATFORM statements or even worse “Add as Link” source file sharing practices. If you have an existing .net class library which you would like to reference from a different runtime (e.g. you have a .NET Framework 4.5 library which you would like to reference from a Windows Store project), you can either create a new portable class library and move the classes there or edit the existing .csproj file and change the XML directly. The following example shows how to convert a .NET Framework 4.5 library to a Portable Class Library. First Unload the Project and change the following settings in the .csproj file: <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> to: <Import Project="$(MSBuildExtensionsPath32)\Microsoft\Portable \$(TargetFrameworkVersion)\Microsoft.Portable.CSharp.targets" /> and add the following keys to the first property group in order to get visual studio to show the framework picker dialog: <ProjectTypeGuids>{786C830F-07A1-408B-BD7F-6EE04809D6DB}; {FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>   After that you can select the frameworks in the Library Tab of the Portable Library:   As last step, delete any framework references from the library as you have them already referenced via the .NET Portable Subset.     [1] Cross-Platform Development with the .NET Framework - http://msdn.microsoft.com/en-us/library/gg597391.aspx [2] Framework Profiles in .NET: http://nitoprograms.blogspot.de/2012/05/framework-profiles-in-net.html


  • No endpoint listening at.........

    - by Michael Stephenson
    I was having some very frustrating behaviour on our build server, and while I found a number of articles online with similar error messages, none of them helped me. I thought I would explain this here in case it helps me or anyone else in the future.

    The error message we were getting is:

    There was no endpoint listening at http://localhostStubs.ExternalApplication/SampleService.svc that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

    Our scenario is as follows: we have a solution where a WCF service application hosting the WCF routing service is listening to the Windows Azure Service Bus Relay. We have an acceptance test project in the solution which sends a message to the service bus; the message is then received by the WCF routing service and routed to SampleService.svc, which is hosted in another IIS application on the same box. A response is flowed back through to the test. The tests cover five scenarios simulating a successful message and various error conditions. On my developer machine it worked absolutely fine every time, and a clean build on my developer machine worked fine. On the build server, however, one or more of the tests would fail each time with the above error message, and there didn't seem to be any pattern to which test would fail.

    The solution was building on a Windows 2008 R2 machine with IIS 7 and AppFabric Server installed, with auto-start configured for the IIS application that listens to the service bus. After lots of searching online and looking at logs, it turned out that the simple fix was to restart the WAS service (Windows Process Activation Service) and the services it advised restarting with it. Hope this helps someone else.
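
    If anyone wants to script that restart on the build server, something like the following should work from an elevated command prompt. This is only a sketch, not part of the original post, and assumes a default IIS setup where W3SVC depends on WAS:

        :: Stop WAS; /y also stops dependent services such as W3SVC without prompting
        net stop WAS /y
        :: Starting the web service brings WAS back up automatically as a dependency
        net start W3SVC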


  • SQL SERVER – How to Install SQL Server 2014 – A 99 Seconds Video

    - by Pinal Dave
    Last month I presented at 3 community and 5 corporate events. Every single time I was asked about my experience with SQL Server 2014, and every single time I told the audience that they should try it out themselves; however, the response has been very lukewarm. Everybody wants to know how SQL Server 2014 works, but no one wants to try it out themselves. When I asked why users are not installing SQL Server 2014, I received pretty much the same answer from everyone: "The Fear of Unknown". Those who have not installed SQL Server 2014 are not sure how the installation process works and worry about facing issues during installation. If you have installed an earlier version of SQL Server, installing SQL Server 2014 is a very easy process. I have created a quick 99-second video where I explain how to install SQL Server 2014; it is a straightforward default installation of SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Video


  • How to install Canon MP610 printer on Ubuntu 12.04 x64

    - by Arkadius
    I installed Ubuntu 12.04 x64. The original Canon drivers are only for the 32-bit version. How can I install this printer on the 64-bit version? Arkadius

    HERE IS THE SOLUTION

    I looked for a solution for some time and finally found it. First I tried adding the repository described here: http://www.iheartubuntu.com/2012/02/install-canon-printer-for-ubuntu-linux.html BUT it did NOT work: the printer was installed, but every print job went somewhere (probably to /dev/null :) ). Installing ia32-libs (sudo apt-get install ia32-libs) also did NOT work, as it was already installed.

    Finally I found a solution. NOTE: I did NOT use the original 32-bit Canon drivers, and I removed the drivers from the ppa:michael-gruz/canon repository. I found the solution almost at the end of this thread: http://ubuntuforums.org/showthread.php?t=1967725&page=10 The most important hint was in response #97: "Do NOT install any PPA". I did as follows:

    Removed all copies of my printer.

    Removed the Canon drivers from the ppa:michael-gruz/canon repository:
        sudo apt-get remove cnijfilter*

    Added the new repository and installed the CUPS bjnp backend for Canon:
        sudo apt-add-repository ppa:robbiew/cups-bjnp
        sudo apt-get update
        sudo apt-get install cups-bjnp

    Installed Gutenprint:
        sudo apt-get install printer-driver-gutenprint

    Restarted CUPS:
        sudo restart cups

    Added myself to the lp group:
        sudo usermod -G lp -a your_user_name

    Added the printer using the steps from the link above: don't install any PPA for the drivers. Click the cog in the top right-hand corner and select Printers. Turn on the printer and make sure it is connected. When the Printers window appears, click +Add and wait a few minutes. Your printer should appear within the configuration wizard; mine did, and it's a Canon MX330. Accept the defaults and continue. CUPS should identify your printer (I saw a few other models in the list). I was able to successfully print a test page afterwards.

    I hope this will also help someone else. Arkadius
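
    One extra check that may help before adding the printer (not part of the original answer, just a sketch) is to confirm that CUPS can actually see the device once the backend and Gutenprint are installed:

        # List the device URIs CUPS can discover; a USB-attached MP610 should show
        # up as a usb://Canon/... URI, and a networked Canon as a bjnp://... URI.
        sudo lpinfo -v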


  • Forwarding a subdomain to main domain using Godaddy.

    - by Ryan Hayes
    I have a blog which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted with GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing you can see it at http://blog.ryanhayes.net). When I ping both the subdomain and the domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via the other domain. Currently I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I assumed it would behave the same way.

    My question is: what is the proper way to set up my DNS so that the blog subdomain forwards to my main domain (301) using the GoDaddy DNS manager? Bonus: what is the background on why I am getting a 403 error the current way?

    Forbidden
    You don't have permission to access / on this server.
    Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    UPDATE 12/7/2010: The error on the site has been fixed, so you can no longer view it from my site.
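
    A note on the 301 itself: DNS records alone can never issue an HTTP redirect, so the 301 has to come from whatever answers the request (the web server or GoDaddy's forwarding service). If the subdomain already resolves to the same hosting account, one alternative to the DNS-manager forwarding screen is a mod_rewrite rule in a .htaccess file at the web root. This is only a sketch, not from the original question, and assumes Apache with mod_rewrite enabled:

        RewriteEngine On
        # Permanently redirect any request for the blog subdomain to the same path
        # on the bare domain.
        RewriteCond %{HTTP_HOST} ^blog\.ryanhayes\.net$ [NC]
        RewriteRule ^(.*)$ http://ryanhayes.net/$1 [R=301,L]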


  • What You Said: Your Favorite Co-Op Games

    - by Jason Fitzpatrick
    While competitive gaming is fun, reader response to this week's Ask the Readers question shows that good old beat-the-bad-guys-together cooperative gaming is as popular as ever. Read on to see what your fellow readers are playing.

    By far the most popular nomination for favorite co-op game was an outright classic: 1987's smash hit Contra. Originally released as an arcade game, it was ported to the Nintendo Entertainment System in 1988. Contra was groundbreaking for its time because it featured simultaneous play for two players: you and a friend could play side by side without waiting to take turns. Clearly that kind of side-by-side play resonated with readers. RJ writes: "When my fiancée and I played and beat Contra on the NES, I knew she was the one. We got married and it's been great." That's no small feat; Contra was voted "Toughest Game to Beat" by IGN.com readers. Even readers who had moved on to newer games still recall Contra fondly; Jami writes: "The Gears of War trilogy on 360 is my favorite co-op currently, although I do have fond memories of bonding with my brother playing some co-op Contra on the NES."

