Search Results

Search found 4460 results on 179 pages for 'uninitialized proxy'.


  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or tried to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer’s blog. You can also find the WebLogic documentation here.

    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call invokes the back-end service to get the result, which is then stored in the cache for future requests to access. I’m thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.

    Result Caching in a Dedicated JVM

    This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large, and you may want to protect your OSB Java heap.

    In this example, the client calls an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and when called again, the respective results are retrieved from the cache rather than the external system.

    Step 1 – Set up your Coherence Server

    Via the OSB Administration Server Console, create the Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I’m using the default Cache Config and Override files in the domain:

        -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote

    Just in case you need it, here is my Coherence Server classpath:

        /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar:/app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar:/app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar

    By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:

        -Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-cluster

    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading "Using an Out-of-Process Coherence Cache Server".

    Step 2 – Configure your Business Service

    Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable “Result Caching”. Additionally, you need to determine what the cache data will be keyed on. In the example below, I’m keying it on the unique Employee Id.

    The Results

    As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id, with result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request. All subsequent requests used the Results Cache.
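    To get a feel for the time-to-live behaviour the result cache relies on, here is a minimal sketch against the plain Coherence 3.6 API. The cache name, key and value are invented for illustration; OSB manages its own result cache internally, so this is not how you would touch OSB's cache in practice:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class ResultCacheTtlSketch {
            public static void main(String[] args) {
                // Join the cluster (per the override/cache-config files above)
                // and obtain a named cache; the name is hypothetical.
                NamedCache cache = CacheFactory.getCache("employee-results");

                // Store a result with a 30-second time-to-live, mirroring the
                // expiration OSB applies to cached Business Service results.
                cache.put("employee-1001", "<employee><id>1001</id></employee>", 30000L);

                // Within the TTL, reads are served from the cache; after expiry,
                // get() returns null and the caller must re-invoke the back-end
                // service - exactly what OSB does when a cached result expires.
                System.out.println("cached: " + cache.get("employee-1001"));

                CacheFactory.shutdown();
            }
        }

    Run it with coherence.jar on the classpath and the same cluster/override settings as the dedicated cache server, so it joins the same cluster.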

    Read the article

  • Closer look at the SOA 12c Feature: Oracle Managed File Transfer

    - by Tshepo Madigage-Oracle
    The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application integration complexity. To meet this challenge, Oracle introduced Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution. With simplified cloud, mobile, on-premises, and Internet of Things (IoT) integration capabilities, all within a single platform, Oracle SOA Suite 12c helps organizations speed time to integration, improve productivity, and lower TCO.

    To extend its B2B solution capabilities with Oracle SOA Suite 12c, Oracle unveiled Oracle Managed File Transfer, an integrated solution that enables organizations to virtually eliminate file transfer complexities. This allows customers to load data securely into Oracle Cloud applications as well as third-party cloud or partner applications.

    Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal departments and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can draw on more of your resources to manage file transfers. The extensive reporting capabilities allow you to get a quick status of a file transfer and resubmit it as required. You can protect data in your DMZ by using the SSH/FTP reverse proxy.

    Oracle Managed File Transfer can help integrate applications by transferring files between them in complex use-case patterns:
    - Standalone: transferring files on its own, using the embedded FTP and sFTP servers and the file systems to which it has access.
    - SOA integration: a SOA application can be the source or target of a transfer. A SOA application can also be the common endpoint for the target of one transfer and the source of another.
    - B2B integration: a B2B application can be the source or target of a transfer. A B2B application can also be the common endpoint for the target of one transfer and the source of another.
    - Healthcare integration: a Healthcare application can be the source or target of a transfer. A Healthcare application can also be the common endpoint for the target of one transfer and the source of another.
    - Oracle Service Bus (OSB) integration: MFT can integrate with Oracle Service Bus web service interfaces. An OSB interface can be the source or target of a transfer. An Oracle Service Bus interface can also be the common endpoint for the target of one transfer and the source of another.
    - Hybrid integration: MFT can be one participant in a web of data transfers that includes multiple application types.

    Oracle Managed File Transfer has four user roles: file handlers, designers, monitors, and administrators.

    File handlers:
    - Copy files to file transfer staging areas, which are called sources.
    - Retrieve files from file transfer destinations, which are called targets.

    Designers:
    - Create, read, update and delete file transfer sources.
    - Create, read, update and delete file transfer targets.
    - Create, read, update and delete transfers, which link sources and targets in complete file delivery flows.
    - Deploy and test transfers.

    Monitors:
    - Use the Dashboard and reports to ensure that transfer instances are successful.
    - Pause and resume lengthy transfers.
    - Troubleshoot errors and resubmit transfers.
    - View artifact deployment details and history.
    - View artifact dependence relationships.
    - Enable and disable sources, targets, and transfers.
    - Undeploy sources, targets, and transfers.
    - Start and stop embedded FTP and sFTP servers.

    Administrators:
    - All file handler tasks
    - All designer tasks
    - All monitor tasks
    - Add other users and determine their roles
    - Configure user directory permissions
    - Configure the Oracle Managed File Transfer server
    - Configure embedded FTP and sFTP servers, including security
    - Configure B2B and Healthcare domains
    - Back up and restore the Oracle Managed File Transfer configuration
    - Purge transferred files and instance data
    - Archive and restore instance data and payloads
    - Import and export metadata

    You will find all the related information about SOA 12.1.3 Oracle Managed File Transfer (MFT) in the documentation: Using Oracle Managed File Transfer.

    Resources and links:
    - Oracle Unveils Oracle SOA Suite 12c
    - Oracle Managed File Transfer
    - Oracle Managed File Transfer SOA 12c White Paper

    For further enquiries don't hesitate to contact us at [email protected] and join our Partner Webcast on Oracle SOA Suite 12c.

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    The Why and the How

    If you have heard of Oracle Configuration Manager (OCM) but haven’t installed it, I’m guessing this is for one of two reasons: either you don’t know how it helps you, or you don’t know how to install it. I’ll address both of those reasons today. First, let’s take a quick look at how My Oracle Support and Oracle Configuration Manager work together, to gain a good understanding of their differences and roles before we tackle the install.

    Oracle Configuration Manager is the tool that actually performs the data collection task. You deploy this lightweight piece of software onto your system to collect configuration information about the system, and OCM uploads that data to Oracle’s customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request. You can also view the data through My Oracle Support.

    The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. When you view the information in My Oracle Support’s user interface to OCM, it may help you avoid situations that create problems. The proactive tools included in Oracle Configuration Manager help you avoid issues before they occur, and you also save time because you didn’t need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or to monitor system health. If you make the configuration data available to Oracle Support Engineers, then when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too.

    Quick Installation Process Overview

    Before we dive into the step-by-step details, let me provide a quick overview. For some of you, this will be all you need. Log in to My Oracle Support and download the data collector from the Collector tab. If you don’t see the Collector tab, click the More tab to gain access. On the Collector tab, you will find a drop-down list showing which platforms are available. You can also see more ways the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory and unzip the file within that directory. You can then use the command-line tool to start the installation process. The installation process requires your My Oracle Support credentials (Support Identifier, username, and password) and, if applicable, a proxy specification (host IP address, port number, username, and password).

    Installation Step-by-Step
    - Download the collector zip file from My Oracle Support and place it into your $ORACLE_HOME.
    - Unzip the file you downloaded from My Oracle Support – this will create a directory named CCR with several subdirectories.
    - Using the command line, go to $ORACLE_HOME/CCR/bin and run the command "setupCCR".
    - Provide your My Oracle Support credentials: login, password, and Support Identifier.
    - The installer will start deploying the collector application.

    You have installed the Collector.

    Post-Installation

    Now that you have installed successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation. If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details on that in the Installation and Administration Guide for My Oracle Support Configuration Manager.

    Related documents available on My Oracle Support:
    - Oracle Configuration Manager Installation and Administration Guide [ID 728989.5]
    - Oracle Configuration Manager Prerequisites [ID 728473.5]
    - Oracle Configuration Manager Network Connectivity Test [ID 728970.5]
    - Oracle Configuration Manager Collection Overview [ID 728985.5]
    - Oracle Configuration Manager Security Overview [ID 728982.5]
    - Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]

    Read the article

  • PHP Web Services - Nice try

    Thanks to its membership in the O'Reilly User Group Programme, the Mauritius Software Craftsmanship Community (short: MSCC) recently received a welcome package with several book titles. Among them is the latest publication of Lorna Jane Mitchell – 'PHP Web Services: APIs for the Modern Web'. Following is the book review I put on Amazon:

    Nice try! Initially, I was astonished that a small book like 'PHP Web Services' would be able to cover all the interesting topics about APIs and Web Services, independently of whether they are written in PHP or not. And unfortunately, the title isn't able to live up to the reader's (or at least my) expectations. Maybe as a light defense, there is no usual paragraph about the intended audience of the book, but still I have to admit that the first half (chapters 1 to 8) is well written and Lorna makes her points on the various technologies. Also, the code samples in PHP are clean and easy to understand.

    With the chapter 'Debugging Web Services' the book started to change my mind about the clarity of advice and the instructions on designing and developing good APIs. Eventually, this might be related to the fact that I have been used to other tools for years, like Telerik Fiddler as an HTTP proxy in order to trace and inspect any kind of request/response handling – including localhost monitoring, SSL certificate acceptance, and the ability to debug mobile devices, especially iOS-based ones. Compared to Charles, Fiddler is available for free.

    What really put me off is the following statement in chapter 10 about Service Type Decisions: "For users who have larger systems using technology stacks such as Java, C++, or .NET, it may be easier for them to integrate with a SOAP service." WHAT? A couple of pages earlier the author recommends staying away from 'old-fashioned' API styles like SOAP (if possible). And on top of that, I wonder why there are tons of documentation on developing RESTful Web Services based on Web API – the ASP.NET stack has clearly been moving away from SOAP towards JSON and REST for years! Honestly, as a software developer on the .NET stack this leaves a mixed feeling after all.

    As for the remaining chapters, I simply consider them 'blah blah' without any real value and with lots of theoretical advice. Related to chapter 13 about 'Documentation', I just had the 'pleasure' of writing a C#-based client against a Java-based SOAP Web Service. Personally, I take the WSDL as the master reference in the first place, and Visual Studio generates all the stub types involved in the communication. During implementation and testing I came across a 'java.lang.NullPointerException' in various methods and for various method parameters. The WSDL and the generated types were declared as nullable, so nothing to worry about, or? Well, I logged a support ticket, and guess what the response to that scenario was? "The service definition in the WSDL is wrong, please refer to the documentation in order to use the methods and parameters correctly" – no comment!

    Lorna's title is a quick read, and in some areas she has good advice on designing and implementing Web Services and APIs. But roughly 100 pages aren't enough to cover a vast topic like that. After all, nice try, and I'm looking forward to an improved second edition.

    Honestly, I never thought that I would end up writing a poor review. In general, it's a good book, but it clearly lacks depth, the PHP code samples are incomplete (closing tags missing), and there are too many assumptions and theoretical statements.

    Read the article

  • Get Pop Up Notifications for Your RSS Feeds with Feed Notifier

    - by DigitalGeekery
    Are you looking for a way to get updates from your favorite websites right on your desktop? If so, you’ll want to check out Feed Notifier. This free Windows application runs in the system tray and delivers pop-up notifications to your desktop when your subscribed RSS feeds are updated.

    Download and install Feed Notifier (download link below). When you are finished installing, the Feed Notifier Preferences window will open. Click on the Add… button to add an RSS feed. Copy and paste the feed URL into the text box and click Next. Choose your polling interval; this is how often your feed will be checked for new items. You can set your polling interval in days, hours, minutes, or even seconds. Click Finish.

    At your configured interval, Feed Notifier will check your feeds for new items. If new items are present, they will pop up above your system tray, showing an intro portion of the article. Simply click the headline in the feed pop-up to open the full article in your default browser.

    Setting Preferences

    Open the preferences of Feed Notifier by going to Start > All Programs > Feed Notifier, or by right-clicking on the system tray icon and selecting Preferences. On the Pop-ups tab you can configure the duration in seconds that each article stays displayed on your screen; the default is five seconds. You can also change the size of the display, the theme, and the amount of content displayed. The Options tab offers additional configuration, like article caching and using a proxy server. The Filters tab allows you to filter in or out certain content. To add a filter, click Add…, then type in the filter rule. You can even choose to apply it to only certain feeds. Click OK. Feed Notifier will display on the Filters tab the number of times each filter has been applied. Click OK when finished.

    You can scroll through the articles by using the forward and back buttons at the lower left, or use the play/pause buttons to move through the articles in a slideshow-type fashion.

    Feed Notifier is a nice way to get your updated feeds delivered to your desktop in a timely fashion. It supports all RSS and Atom feeds and features a clean look and feel with plenty of customizable options.

    Download Feed Notifier

    Read the article

  • Web Application Integration Steps in OAM 11gR2 (High Level)

    - by Venkata Srikanth
    Install OAM, Webtier (OHS) and WebGate as per the standard installation steps.

    Create a WebGate instance (i.e. deploy WebGate). A WebGate instance must be created that copies the required agent bits from WEBGATE_HOME to the WebGate instance location that shares the same INSTANCE_HOME with OHS:

        ./deployWebGateInstance.sh -w /Oracle/Middleware/Oracle_WT1/instances/instance1/config/ohs1 -oh /Oracle/Middleware/Oracle_OAMWebGate1

    Note: Here the -w flag indicates the OHS instance folder and -oh indicates the WebGate Oracle home.

    Configure WebGate. In the WebGate configuration, the EditHttpdConf utility copies the OUI-instantiated apache_webgate.template from WEBGATE_HOME to the WebGate instance location (renamed to webgate.conf), and updates httpd.conf with one additional line to include webgate.conf:

        export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/Oracle/Middleware/Oracle_WT1/lib
        Navigate to /Oracle/Middleware/Oracle_OAMWebGate1/webgate/ohs/tools/setup/InstallTools
        ./EditHttpdConf -w /Oracle/Middleware/Oracle_WT1/instances/instance1/config/OHS/ohs1 -oh /Oracle/Middleware/Oracle_OAMWebGate1 -o webgate.conf

    Register WebGate. Use the RREG tool to register the OAM 11g WebGate:
    - Navigate to /Oracle/Middleware/Oracle_IDM1/oam/server/rreg/input
    - Edit OAM11Grequest.xml. Change the specific XML content to include the WebLogic admin URL, agentBaseURL, host identifier, etc.
    - Navigate to /Oracle/Middleware/Oracle_IDM1/oam/server/rreg/bin
    - Set permissions on oamreg.sh → chmod 777 oamreg.sh
    - Edit oamreg.sh and set OAM_REG_HOME=/Oracle/Middleware/Oracle_IDM1/oam/server/rreg
    - ./oamreg.sh inband input/OAM11Grequest.xml
    - Enter the WebLogic admin credentials when prompted.

    After performing the above steps, there will be two artifacts created under /Oracle/Middleware/Oracle_IDM1/oam/server/rreg/output, namely ObAccessClient.xml (storing the WebGate config parameters) and cwallet.sso (storing the agent key). These files must be copied to the WebGate instance config folder (/Oracle/Middleware/Oracle_WT1/instances/instance1/config/ohs1/webgate/config).

    Restart OHS.

    Deploy the web application (myApp) in the WebLogic application server.

    Proxy configuration in OHS: the mod_wl_ohs module enables requests to be proxied from Oracle HTTP Server 11g to Oracle WebLogic Server. Navigate to /Oracle/Middleware/Oracle_WT1/instances/instance1/config/OHS/ohs1 and edit the mod_wl_ohs.conf file to include the following:

        <IfModule weblogic_module>
          WebLogicHost <WEBLOGIC_HOST>
          WebLogicPort <WEBLOGIC_PORT>
          # Debug ON
          # WLLogFile /tmp/weblogic.log
          MatchExpression *.jsp
        </IfModule>
        <Location /myApp>
          SetHandler weblogic-handler
          # PathTrim /weblogic
          # ErrorPage http://WEBLOGIC_HOME:WEBLOGIC_PORT/
        </Location>

    Note: Here WEBLOGIC_HOST and WEBLOGIC_PORT are the WebLogic admin server host and port respectively.

    Restart OHS. Now we can access the web application URL with the OHS host and port (e.g. http://OHS_HOST:<OHS_PORT>/myApp), and the requests will be proxied to the WebLogic server.

    Create a new application domain:
    - Login to the OAM Admin Console.
    - Navigate to Shared Components → Authentication Schemes → Create Authentication Scheme (e.g. LDAP Auth Scheme; here the scheme is associated with the LDAP Authentication Module).
    - Navigate to Policy Configuration → Application Domain → Create Application Domain.
    - Enter the Application Domain Name and click Apply.
    - Navigate to the Resources tab and add the resource URLs (the web application URLs that need to be protected).
    - Navigate to the Authentication Policy tab → create a new authentication policy by providing the resource URLs (the sample web application URLs) and the Authentication Scheme.
    - Navigate to the Authorization Policy tab → create a new authorization policy → enter the authorization policy name and navigate to the Resources tab → attach the resource URL and host identifiers here.
    - Navigate to the Conditions tab → add conditions such as whom to allow and whom to deny access.
    - Navigate to the Rules tab → create the Allow Rule and Deny Rule with the available conditions from the previous step so that the authorization policy can authorize the logins.
    - Navigate to the Resources tab and attach the authentication and authorization policies created in the above steps.

    Test the web application integration.

    Read the article

  • Tyrus 1.8

    - by Pavel Bucek
    Another version of Tyrus, the reference implementation of JSR 356 – Java API for WebSocket, is out! The complete list of fixes and features is below, but let me describe some of the new features in more detail. All information presented here is also available in the Tyrus documentation.

    What’s new?

    First, the JSR 356 Maintenance Review Ballot is over and the change proposed for the 1.1 release was accepted. More details about changes in the API can be found in this article. The important part is that Tyrus 1.8 implements this API, meaning you can use lambda expressions and some features of Nashorn without the need for any workarounds.

    Almost all other features are related to client-side support, which was significantly improved in this release. Firstly – I have to admit that the Tyrus client contained a security issue – SSL hostname verification was not performed when connecting to “wss” endpoints. This was fixed as part of TYRUS-339 and resulted in some changes in the client configuration API. Now you can control whether hostname verification should be performed (SslEngineConfigurator#setHostnameVerificationEnabled(boolean)) or even set your own HostnameVerifier (please use carefully): #setHostnameVerifier(…). A detailed description can be found in the Host verification chapter.

    Another related enhancement is support for HTTP Basic and Digest authentication schemes. The Tyrus client now enables users to provide credentials, and the underlying implementation will take care of everything else. Our implementation is strictly non-pre-emptive, so the login information is always sent as a response to a 401 HTTP status code. If Basic and Digest are not good enough and there is a need to use some custom scheme or something not yet supported in Tyrus, a custom Authenticator can be registered and the authentication part of the handshake process will be handled by it. Please see the Client HTTP Authentication chapter in the user guide for more details.

    There are other features, like fine-grained thread pool configuration for the JDK client container, built-in HTTP redirect support, and some reshuffling related to unifying the location of client configuration classes and properties definitions – every property should now be part of the ClientProperties class. All new features are described in the user guide – in the chapter Tyrus proprietary configuration.

    Update – Tyrus 1.8.1

    There was another issue, reported slightly late, related to running in environments with a SecurityManager enabled, so this version fixes that. Other noteworthy fixes are TYRUS-355 and TYRUS-361; the first one is about an incorrect thread factory used for the shared container timeout, which resulted in the JVM waiting for that thread and not exiting as it should. The other issue enables relative URIs in the Location header when using the redirect feature.
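    To make the client-side additions above concrete, here is a minimal, illustrative sketch of a Tyrus client configured with hostname verification and Basic/Digest credentials. The endpoint class, URI and credentials are hypothetical, and the property usage reflects my reading of the Tyrus 1.8 docs – check the user guide before relying on it:

        import java.net.URI;
        import javax.net.ssl.SSLContext;
        import javax.websocket.ClientEndpoint;
        import org.glassfish.tyrus.client.ClientManager;
        import org.glassfish.tyrus.client.ClientProperties;
        import org.glassfish.tyrus.client.SslEngineConfigurator;
        import org.glassfish.tyrus.client.auth.Credentials;

        // Placeholder annotated endpoint, just enough to connect.
        @ClientEndpoint
        class MyClientEndpoint { }

        public class SecureClientSketch {
            public static void main(String[] args) throws Exception {
                ClientManager client = ClientManager.createClient();

                // SSL setup for "wss" - hostname verification can be toggled
                // via SslEngineConfigurator (disable only with great care).
                SslEngineConfigurator ssl = new SslEngineConfigurator(SSLContext.getDefault());
                ssl.setHostnameVerificationEnabled(true);
                client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, ssl);

                // HTTP Basic/Digest: supply credentials once; Tyrus answers the
                // 401 challenge itself (strictly non-pre-emptive).
                client.getProperties().put(ClientProperties.CREDENTIALS,
                        new Credentials("user", "password"));

                client.connectToServer(MyClientEndpoint.class,
                        URI.create("wss://example.org/echo"));
            }
        }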
    Links: Tyrus homepage | mailing list | JIRA

    Complete list of changes:

    Bug
    - [TYRUS-333] – Multiple endpoints on one client
    - [TYRUS-334] – When connection is closed by a peer, periodic heartbeat pong is not stopped
    - [TYRUS-336] – ReaderBuffer.getNextChars() keeps blocking a server thread after client has closed the session
    - [TYRUS-338] – JDK client SSL filter needs better synchronization during handshake phase
    - [TYRUS-339] – SSL hostname verification is missing
    - [TYRUS-340] – Test PathParamTest are not stable with JDK client
    - [TYRUS-341] – A control frame inside a stream of continuation frames is treated as the part of the stream
    - [TYRUS-343] – ControlFrameInDataStreamTest does not pass on GF
    - [TYRUS-345] – NPE is thrown, when shared container timeout property in JDK client is not set
    - [TYRUS-346] – IllegalStateException is thrown, when using proxy in JDK client
    - [TYRUS-347] – Introduce better synchronization in JDK client thread pool
    - [TYRUS-348] – When a client and server close connection simultaneously, JDK client throws NPE
    - [TYRUS-356] – Tyrus cannot determine the connection port for a wss URL
    - [TYRUS-357] – Exception thrown in MessageHandler#OnMessage is not caught in @OnError method
    - [TYRUS-359] – Client based on Java 7 Asynchronous IO makes application unexitable

    Improvement
    - [TYRUS-328] – JDK 1.7 AIO client container – threads (setting thread pool, limits, …)
    - [TYRUS-332] – Consolidate shared client properties into one file.
    - [TYRUS-337] – Create an SSL version of Basic Servlet test

    New Feature
    - [TYRUS-228] – Add client support for HTTP Basic/Digest

    Task
    - [TYRUS-330] – create/run tests/servlet/basic via wss
    - [TYRUS-335] – [clustering] – introduce RemoteSession and expose them via separate method (not include remote sessions in the getOpenSessions())
    - [TYRUS-344] – Introduce Client support for HTTP Redirect

    Read the article

  • Communication between state machines with hidden transitions

    - by slartibartfast
    The question emerged for me in embedded programming, but I think it can be applied to quite a number of general networking situations, e.g. when a communication partner fails.

    Assume we have an application logic (a program) running on a computer and a gadget connected to that computer via a serial interface like RS232. The gadget has a red/green/blue LED and a button which disables the LED. The LED's color can be driven by software commands over the serial interface, and the state (red/green/blue/off) is read back and causes a reaction in the application logic. Asynchronous behaviour of the application logic with regard to the LED color, up to a certain delay (depending on the execution cycle of the application), is tolerated.

    What we essentially have is a resource (the LED) which cannot be reserved and handled atomically by software, because the (organic) user can at any time press the button to interfere with or break the software's attempt to switch the LED color. Stripping this example of its physical outfit, I dare say that we have two communicating state machines A (application logic) and G (gadget), where G executes state changes unbeknownst to A (and also the other way round, but this is not significant in our example) and only A can be modified at a reasonable price. A needs to see the reaction and state of G in one piece of information, which may be (slightly) outdated but not inconsistent with respect to the short time window in which this information was generated on the side of G.

    What I am looking for is a concise method to handle such a situation in embedded software (i.e. no layer/framework like CORBA etc. available) – a programming technique which is able to map the complete behaviour of both participants onto classical interfaces of a classical programming language (C in this case). To complicate matters (or rather, to generalize), a simple high-frequency communication cycle from A to G and back (IOW: A rapidly polling G) is out of focus because of technical restrictions (delay of serial com, A not always active, etc.).

    What I currently see as a general solution is:
    - the application logic A as one thread of execution
    - an adapter object (proxy) PG (representing G inside the computer), together with the serial driver, as another thread
    - a communication object between the two (A and PG) which is transactionally safe to exchange

    The two execution contexts (threads) on the computer may be multi-core, or just interrupt-driven, or tasks in an RTOS. The com object contains the following data:
    - suspected state (written by A): effectively a member of the power set of states in G (in our case: red, green, blue, off, red_or_green, red_or_blue, red_or_off... etc.)
    - command data (written by A): test_if_off, switch_to_red, switch_to_green, switch_to_blue
    - operation status (written by PG): operation_pending, success, wrong_state, link_broken
    - new state (written by PG): red, green, blue, off

    The idea of the com object is that A writes whichever (set of) state it thinks G is in, together with a command (example: suspected state = "red_or_green", command = "switch_to_blue"). Notice that the commands issued by A will not work if the user has switched off the LED, and A needs to know this. PG will pick up such a com object and try to send the command to G, receive its answer (or a timeout), and set the operation status and new state accordingly. A will take back the object once it is no longer at operation_pending and can react to the outcome.
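    To make the com object concrete, here is one possible sketch – written in Java for brevity, although the original setting is C; all names are invented, and the transactional hand-off is reduced to a single atomic slot:

        import java.util.EnumSet;
        import java.util.concurrent.atomic.AtomicReference;

        enum LedState { RED, GREEN, BLUE, OFF }
        enum Command { TEST_IF_OFF, SWITCH_TO_RED, SWITCH_TO_GREEN, SWITCH_TO_BLUE }
        enum Status { OPERATION_PENDING, SUCCESS, WRONG_STATE, LINK_BROKEN }

        // The com object exchanged between A (application logic) and PG (the proxy for G).
        final class ComObject {
            final EnumSet<LedState> suspectedState; // written by A: a member of the power set of G's states
            final Command command;                  // written by A
            volatile Status status;                 // written by PG
            volatile LedState newState;             // written by PG

            ComObject(EnumSet<LedState> suspectedState, Command command) {
                this.suspectedState = suspectedState;
                this.command = command;
                this.status = Status.OPERATION_PENDING;
            }
        }

        final class ComSlot {
            // One atomic slot; compare-and-set gives the "transactionally safe" exchange.
            private final AtomicReference<ComObject> slot = new AtomicReference<ComObject>();

            // A submits, e.g. new ComObject(EnumSet.of(LedState.RED, LedState.GREEN),
            //                               Command.SWITCH_TO_BLUE)
            boolean submit(ComObject c) { return slot.compareAndSet(null, c); }

            // PG picks the object up, talks to G, then fills in status and newState.
            ComObject current() { return slot.get(); }

            // A takes the object back once it has left OPERATION_PENDING.
            ComObject takeResult() {
                ComObject c = slot.get();
                if (c != null && c.status != Status.OPERATION_PENDING
                        && slot.compareAndSet(c, null)) {
                    return c;
                }
                return null;
            }
        }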
    The com object could of course be separated (into two objects, one for each direction), but I think it is convenient in nearly all instances to have the command close to the result. I would like to have major flaws pointed out, or to hear an entirely different view on such a situation.

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD web server on TCP port 80, or, if secure connections are enabled (SSL/TLS), TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP". The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive-mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are multiplexed onto a single well-known TCP port, that is port 443, the HTTPS port. This is also known as single-port firewall traversal. This technique takes advantage of the fact that, as a well-known service, port 443 is usually open, allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ: it accepts, authenticates, and terminates incoming client connections, then re-encrypts the connections and proxies them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in the network; it connects to the gateway only, and the gateway takes care of those internal network details.

    The Secure Gateway supports the same single-port firewall capability as "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, greater flexibility, and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • A more elegant way of embedding a SOAP security header in Silverlight 4

    - by Your DisplayName here!
    The current situation with Silverlight is that there is no support for the WCF federation binding. This means that all security-token-related interactions have to be done manually. Requesting the token from an STS is not really the bad part; sending it along with outgoing SOAP messages is what's a little annoying. So far you had to wrap all calls on the channel in an OperationContextScope wrapping an IContextChannel. This "programming model" was a little disruptive (in addition to all the async stuff that you are forced to do).

    It seems that starting with SL4 there is more support for traditional WCF extensibility points – especially IEndpointBehavior and IClientMessageInspector. I never read anywhere that these are new features in SL4, but I am pretty sure they did not exist in SL3. With the above-mentioned interfaces at my disposal, I thought I'd have another go at embedding a security header – and yeah, I managed to make the code much prettier (and much less bizarre). Here's the code for the behavior/inspector:

        public class IssuedTokenHeaderInspector : IClientMessageInspector
        {
            RequestSecurityTokenResponse _rstr;

            public IssuedTokenHeaderInspector(RequestSecurityTokenResponse rstr)
            {
                _rstr = rstr;
            }

            public void AfterReceiveReply(ref Message reply, object correlationState)
            { }

            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                request.Headers.Add(new IssuedTokenHeader(_rstr));
                return null;
            }
        }

        public class IssuedTokenHeaderBehavior : IEndpointBehavior
        {
            RequestSecurityTokenResponse _rstr;

            public IssuedTokenHeaderBehavior(RequestSecurityTokenResponse rstr)
            {
                if (rstr == null)
                {
                    throw new ArgumentNullException();
                }

                _rstr = rstr;
            }

            public void ApplyClientBehavior(
                ServiceEndpoint endpoint, ClientRuntime clientRuntime)
            {
                clientRuntime.MessageInspectors.Add(new IssuedTokenHeaderInspector(_rstr));
            }

            // rest omitted
        }

    This allows you to set up a proxy with an issued token header, and you don't have to worry anymore about embedding the header manually with every call:

        var client = GetWSTrustClient();

        var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Symmetric)
        {
            AppliesTo = new EndpointAddress("https://rp/")
        };

        client.IssueCompleted += (s, args) =>
        {
            _proxy = new StarterServiceContractClient();
            _proxy.Endpoint.Behaviors.Add(new IssuedTokenHeaderBehavior(args.Result));
        };

        client.IssueAsync(rst);

    Since SL4 also supports the IExtension<T> interface, you can combine this with Nicholas Allen's AutoHeaderExtension.

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn't an academic paper. Journalists wouldn't base a news article on facts that they can't verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality – in other words, without knowing its lineage (sometimes referred to as 'provenance' or 'pedigree')?

    The number and variety of data sources, both traditional and new, increase inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey from its source to our report, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven't been left out, or that the sources are good? Even when we're using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came.

    You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack trace. If you're an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you're a developer, working on a pipeline, it provides the context you need to track down the bug. If you're a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance.

    Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the ecosystem. To get a full picture of what's going on in your Hadoop system you need to capture both Falcon lineage and the data exhaust of other tools that Falcon can't orchestrate.

    However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where 'after' could be a data analysis environment like R, an application, or even directly an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they require.

    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs and your audit data, from every source, bring them together, and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • My shiny new gadget

    - by TechTwaddle
    About 3 months ago, when I had tweeted (or twit?) that the HD7 could be my next phone, I wasn't a hundred percent sure, and when the HTC Mozart came out it was switch at first sight. I wanted to buy the Mozart mainly for three reasons: its unibody construction, smaller screen, and the SLCD display. But now, holding an HD7 in my hand, I reminisce and think about how fate had its own plan. Too dramatic for a piece of gadgetry? Well, sort of, but seriously, this has been most exciting. So in short, I bought myself an HTC HD7 and am really loving it so far. Here are some pics (taken from my HD2, which now lies in a corner, crying).

    Most of my day was spent setting up the device: email accounts, Facebook, Marketplace, etc. Since Marketplace isn't officially launched in India yet, my primary Live ID did not work. Whenever I tried launching Marketplace it would say 'marketplace is not currently supported in your country'. Searching the forums, I found an easy workaround: just create a dummy Live ID with the country set to UK or US and log in to the device using this ID. I was worried that the contacts and feeds from my primary Live account would not be updated, but that was not a problem; adding another Live account to the device does import your contacts, calendar and feeds from it. And that's it, Marketplace now works perfectly. I installed a few trial and free applications; I haven't checked if I can purchase apps though, will check that later and update this post.

    There is one issue I am still facing with the device: I can't access the internet over GPRS. Windows Phone 7 only gives you the option to add an 'APN' and nothing else. Checking the connection settings on my HD2, I found out that there is also a proxy server I need to add to access GPRS, but so far I haven't found a way to do that on WP7. Ideally HTC should have taken care of this (detect the operator and apply that operator's settings on the device), but it looks like that's not happening. I also tried the 'Connection Settings' application that HTC bundled with the device, but it did nothing magical. If you're reading this and know how to fix this problem, please leave a comment.

    The next thing I did was install apps, a lot of apps. Read Engadget's guide to essential apps for WP7. The apps and games I installed so far include Beezz (a twitter app with push notifications), twitter (the official twitter app), Facebook, YouTube, NFS Undercover, Rocket Riot, Krashlander, Unite, and the list goes on. All the apps run super smooth. The display looks fine indoors, but I know it's going to suck in bright sunlight. Anyhow, I am really impressed with what I've seen so far. I leave you with a few more photos. Have a great year ahead. Ciao!

    Read the article

  • How do you deal with poor management [closed]

    - by Sybiam
    I come from a company where, during one project, we saw the client three times over the whole project. We were never informed when the client came to the office to discuss his requirements. I did set up Redmine and told them that if they have any request they can post an issue there. But they never really used Redmine to publish anything. They would instead:
    - harass a team member on the phone at any time of the day or night
    - hand us over sheets of paper with new requests or changes
    - hand us over new (graphical) designs

    They asked how much time it would take us to finish the project; I gave them a date, plus a week to test everything and deploy. I calculated that time taking into account the current features we had to do. And then they blamed us that our deadline was wrong and that we lied. But the truth is that one week before that deadline they added a couple of monster features from nowhere, and during the week when we were supposed to test and deploy, my friends spent all day in the office changing all the little things.

    After that project, my friend got some kind of depression and got scared every time his phone rang. They kind of used him as a communication proxy. After that project of hell (everybody got pissed off on that project), as far as I know the designer who was working with us left after that project, and she had some kind of issue with the managers too. My team also started looking for work somewhere else.

    At first I tried to get things straight with management. I tried to set up a meeting to discuss the communication issues and so on. What really pissed me off and made me leave that job for good is the following exchange:

    Me: "We have to discuss what went wrong on the last project. It's quite important."
    Him: "Let's talk about it in a week or two. Just make a list of all the things you did wrong."
    Me: "We already have a new project and we want to prevent what happened on the last project from happening again."
    Him: "Just do it, and we'll have our meeting in a week. Make a list of all the things you did wrong."

    It kind of ended there; then he organized a meeting at a moment I was unable to come. My friend discussed with him and tried to explain that we really had to talk about organizational issues in how we manage a project. And his answer was pretty much: "During the meeting I don't want to hear how you want us to manage a project; I want to know what you guys did wrong."

    After that, I felt it wasn't even worth discussing anything, since they weren't even ready to listen to us. I found a new job and I'm pretty happy with my choice. I'd like to know how you'd handle such a situation. Is there anything to do to solve such a communication problem? After that project my friend got a depression, and some other employees had their downs too, as far as I know. I wonder what else we can do other than leave these places as soon as possible. I feel sad for the people who are still there and get screamed at, just because they need money in order to eat, and finding another job like that isn't that easy.

    Note: I died a little when our boss asked us (the programmers) to make a list of the things we did wrong. This is probably the stupidest request I ever got. If everybody thinks they did everything right, it doesn't mean that there are no problems. Individual problems are rarely the big issue. Colleagues help each other and solve these issues to prevent problems.

    Read the article

  • Problem in UDP socket programming in C

    - by Md. Talha
    I compile the following C code of a UDP client, then run './udpclient localhost 9191' in the terminal. I put "Enter text:" as hello, but it shows an error in sendto as below:

        Enter text: hello
        hello : error in sendto()guest-1SDRJ2@md-K42F:~/Desktop$

    Note: I first open the server port in another terminal with './server 9191'. I believe there is no error in the server code. The UDP client is not passing the message to the server. If I don't use threads, the message passes. But I have to do it with threads.

    UDP client code:

        /* simple UDP echo client */
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <netdb.h>
        #include <stdio.h>
        #include <pthread.h>

        #define STRLEN 1024

        static void *readdata(void *);
        static void *writedata(void *);

        int sockfd, n, slen;
        struct sockaddr_in servaddr;
        char sendline[STRLEN], recvline[STRLEN];

        int main(int argc, char *argv[])
        {
            pthread_t readid, writeid;
            struct sockaddr_in servaddr;
            struct hostent *h;

            if (argc != 3) {
                printf("Usage: %s <proxy server ip> <port>\n", argv[0]);
                exit(0);
            }

            /* create hostent structure from user entered host name */
            if ((h = gethostbyname(argv[1])) == NULL) {
                printf("\n%s: error in gethostbyname()", argv[0]);
                exit(0);
            }

            /* create server address structure */
            bzero(&servaddr, sizeof(servaddr)); /* initialize it */
            servaddr.sin_family = AF_INET;
            memcpy((char *) &servaddr.sin_addr.s_addr, h->h_addr_list[0], h->h_length);
            servaddr.sin_port = htons(atoi(argv[2])); /* get the port number from argv[2] */

            /* create a UDP socket: SOCK_DGRAM */
            if ((sockfd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
                printf("\n%s: error in socket()", argv[0]);
                exit(0);
            }

            pthread_create(&readid, NULL, &readdata, NULL);
            pthread_create(&writeid, NULL, &writedata, NULL);

            while (1) {
            };

            close(sockfd);
        }

        static void *writedata(void *arg)
        {
            /* get user input */
            printf("\nEnter text: ");
            do {
                if (fgets(sendline, STRLEN, stdin) == NULL) {
                    printf("\n%s: error in fgets()");
                    exit(0);
                }

                /* send a text */
                if (sendto(sockfd, sendline, sizeof(sendline), 0,
                           (struct sockaddr *) &servaddr, sizeof(servaddr)) < 0) {
                    printf("\n%s: error in sendto()");
                    exit(0);
                }
            } while (1);
        }

        static void *readdata(void *arg)
        {
            /* wait for echo */
            slen = sizeof(servaddr);
            if ((n = recvfrom(sockfd, recvline, STRLEN, 0,
                              (struct sockaddr *) &servaddr, &slen)) < 0) {
                printf("\n%s: error in recvfrom()");
                exit(0);
            }

            /* null terminate the string */
            recvline[n] = 0;
            fputs(recvline, stdout);
        }

    Read the article

  • C# WinForms ReportViewer performance issue using RefreshReport() and ServerReport.SetParameters()

    - by mdk
    Hi All, I am currently writing a C# client application that uses the WinForms ReportViewer control to display reports from a remote server. I am having performance trouble with the ReportViewer control, specifically with the two methods reportViewer.ServerReport.SetParameters() and reportViewer.RefreshReport() – they both take a really long time to complete, and not just on the very first call, but on each subsequent call as well. SetParameters() takes 20 to 40 seconds (the timings vary greatly; some calls even execute acceptably fast) and RefreshReport() is a bit faster but still takes ages.

    I don't think the server is the culprit, as the same report viewed in a browser renders pretty fast, about a second tops. The report in question doesn't matter either. When I break into the process and take a look at the call stack, I see a call to Socket.DoConnect. So I thought that was a good reason to start using Fiddler; I installed it, disabled caching, and fired up the app again to see which call takes that long to connect, but the performance issue was gone. By using a proxy I get the same performance as the web browser.

    FYI: I am using NTLM authentication in the following way:

        reportViewer.ServerReport.ReportServerCredentials.NetworkCredentials = new NetworkCredentials() { Username = ... }

    I don't have a strong web background, so I guess my question is: what should this tell me / what should I be looking into? (Btw: adding Fiddler to my installation package is not the solution I am looking for :))

    I am grateful for any pointers. Take care, -Martin

    Read the article

  • Maven error: Unable to get resource / Server redirected too many times

    - by tewe
    Our proxy went down, and I tried to update dependencies with Maven while it was off. Since then I can't download anything with Maven; I get this error for everything. I tried the -U option, deleted my local repository, and tried different Maven versions (2.0.9, 2.2.1), but it doesn't work. Any idea how to solve this? Earlier it also said 'repository will be blacklisted' for all of them.

        Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-compiler-plugin/2.1/maven-compiler-plugin-2.1.pom
        [WARNING] Unable to get resource 'org.apache.maven.plugins:maven-compiler-plugin:pom:2.1' from repository central (http://repo1.maven.org/maven2): Error transferring file: Server redirected too many times (20)
        org.apache.maven.plugins:maven-compiler-plugin:pom:2.1 from the specified remote repositories: jboss-snapshot (http://snapshots.jboss.org/maven2), central (http://repo1.maven.org/maven2), JBoss Repo (http://repository.jboss.com/maven2), spring-maven-snapshot (http://maven.springframework.org/snapshot), com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external), com.springsource.repository.bundles.snapshot (http://repository.springsource.com/maven/bundles/snapshot), jboss (http://repository.jboss.com/maven2), com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release), jboss-snapshot-plugins (http://snapshots.jboss.org/maven2), com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone), jboss-plugins (http://repository.jboss.com/maven2)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90)
            at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558)
            ... 25 more
        Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository
            at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216)
            ... 27 more

    Read the article

  • Error message: "Two different contracts have the same ConfigurationName" when downloading wsdl from

    - by rwwilden
    I get the following error message when I try to use svcutil to generate a client proxy for a xamlx file that is hosted by AppFabric Beta 2:

        Two different contracts have the same ConfigurationName

    I understand the message; however, I cannot find its cause or how to fix it. I'm following the 'Introduction to Workflow Services' lab from the VS2010 RC training kit. The web application has two services: SubmitApplication.xamlx and EducationScreening.xamlx. I'm not sure why, but both of them have four endpoints. If I take a look via the AppFabric Dashboard in IIS Management Studio:
    - basicHttpBinding (Contract: *) (Type: Application (Default))
    - netNamedPipeBinding (Contract: System.ServiceModel.Activities.IWorkflowInstanceManagement) (Type: System (workflowControlEndpoint))
    - netNamedPipeBinding (Contract: *) (Type: Application (Default))
    - serviceMetadataHttpGetBinding (Contract: serviceMetadataHttpGetContract) (Type: System (serviceMetadataEndpoint))

    When taking a look at SubmitApplication.xamlx in a browser, I see the following stack trace:

        [InvalidOperationException: Two different contracts have the same ConfigurationName.]
            System.ServiceModel.Activities.WorkflowServiceHost.CreateDescription(IDictionary`2& implementedContracts) +361
            System.ServiceModel.ServiceHostBase.InitializeDescription(UriSchemeKeyedCollection baseAddresses) +174
            System.ServiceModel.Activities.WorkflowServiceHost.InitializeDescription(WorkflowService serviceDefinition, UriSchemeKeyedCollection baseAddresses) +82
            System.ServiceModel.Activities.WorkflowServiceHost.InitializeFromConstructor(WorkflowService serviceDefinition, Uri[] baseAddresses) +206
            System.ServiceModel.Activities.Activation.WorkflowServiceHostFactory.CreateWorkflowServiceHost(WorkflowService service, Uri[] baseAddresses) +43
            System.ServiceModel.Activities.Activation.WorkflowServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) +974
            System.ServiceModel.HostingManager.CreateService(String normalizedVirtualPath) +1423
            System.ServiceModel.HostingManager.ActivateService(String normalizedVirtualPath) +50
            System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) +1132

        [ServiceActivationException: The service '/HRApplicationServices/SubmitApplication.xamlx' cannot be activated due to an exception during compilation. The exception message is: Two different contracts have the same ConfigurationName..]
            System.Runtime.AsyncResult.End(IAsyncResult result) +889824
            System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +179150
            System.Web.AsyncEventExecutionStep.OnAsyncEventCompletion(IAsyncResult ar) +107

    Can anyone tell me what I'm doing wrong? I haven't configured any of the bindings myself. The BasicHttpBinding is what you get by default in .NET 4 when hosting a service inside a web application. The other bindings are configured by AppFabric. I can't find their configuration anywhere.

    Kind regards, Ronald Wildenberg

    Read the article

  • Most useful free .NET libraries?

    - by Binoj Antony
    I have used a lot of free .NET libraries, some from Microsoft itself! Which ones have you found the most useful?

    Dependency Injection/Inversion of Control: Unity Framework (Microsoft), StructureMap (Jeremy Miller), Castle Windsor, NInject, Spring Framework, Autofac, Managed Extensibility Framework
    Logging: Logging Application Block (Microsoft), Log4Net (Apache), Error Logging Modules and Handlers (ELMAH), NLog
    Compression: SharpZipLib, DotNetZip, YUI Compressor (CSS and JS compression/minification), AjaxMinifier (in other downloads; JS compression, also includes an MSBuild task)
    Ajax: Ajax Control Toolkit (Microsoft), AJAXNet Pro
    Data Mapper: XmlDataMapper, AutoMapper
    ORM: NHibernate, Castle ActiveRecord, Subsonic, XmlDataMapper
    Charting/Graphics: Microsoft Chart Controls for ASP.NET 3.5 SP1, Microsoft Chart Controls for Winforms, ZedGraph Charting, NPlot (charting for ASP.NET and WinForms)
    PDF Creators/Generators: PDFsharp, iTextSharp
    Unit Testing/Mocking: NUnit, Rhino Mocks, Moq, TypeMock.Net, xUnit.net, mbUnit, Machine.Specifications
    Automated Web Testing: Selenium, Watin
    URL Rewriting: url rewriter, UrlRewriting.Net, Url Rewriter and Reverse Proxy (Managed Fusion)
    Controls: Krypton (free WinForms controls), Source Grid (a grid control), Devexpress (free controls)
    Unclassified: CSLA Framework (business objects framework), AForge.net (AI, computer vision, genetic algorithms, machine learning), Enterprise Library 4.1 (logging, exception management, validation, policy injection), File helpers library, C5 Collections (collections for .NET), Quartz.NET (enterprise job scheduler for the .NET platform), MiscUtil (utilities by Jon Skeet), Lucene.net (text indexing and searching), Json.NET (LINQ over JSON), Flee (expression evaluator), PostSharp (AOP), IKVM (brings the extensive world of Java libraries to .NET)

    Title of the question taken from here. [EDIT] Please provide links to these free libraries as well. Once we have a huge list like this, it can be arranged into categories! Please do not mention .NET applications/EXEs here.

    Read the article

  • Unable to debug WCF service in VS2008 after UserNamePasswordValidator fault

    - by lsb
    Hi! I have a WCF service that I secure with a custom UserNamePasswordValidator and Message security running over wsHttpBinding. The release code works great. Unfortunately, when I try to run in debug mode after having previously used invalid credentials (the current credentials ARE valid!), VS2008 displays an annoying dialog box (more on this below). A simplified version of the Validate method from my validator might look like the following:

        public override void Validate(string userName, string password)
        {
            if (password != "ABC123")
                throw new FaultException("The password is invalid!");
        }

    The client receives a MessageSecurityException with InnerException set to the FaultException I explicitly threw. This is workable, since my client can display the message text of the original FaultException I wanted the user to see. Unfortunately, in all subsequent service calls VS2008 displays an "Unable to automatically debug..." dialog. The only way I can stop this from happening is to exit VS2008, get back in, and connect to my service using correct credentials. I should also add that this occurs even when I create a brand new proxy on each and every call, so there's no chance MY channel is faulted when I make a call. It's likely, however, that VS2008 hangs on to the previously faulted channel and tries to use it for debugging purposes. Needless to say, this sucks! The entire reason I'm entering "bad" credentials is to test the bad-credential handling. Anyway, if anyone has any ideas as to how I can get around this bug (?!?) I'd be very, very appreciative....
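
    Not an answer to the debugger dialog itself, but the standard client-side hygiene for this scenario is to never Close() or reuse a channel after a security fault — a faulted proxy must be Abort()ed. A minimal sketch, assuming a generated proxy class named MyServiceClient with an operation DoWork (both placeholders, not from the question):

        using System.ServiceModel.Security;

        var client = new MyServiceClient();
        try
        {
            client.DoWork();   // hypothetical service operation
            client.Close();
        }
        catch (MessageSecurityException)
        {
            // Close() would throw again on a faulted channel; Abort() releases it cleanly.
            client.Abort();
            throw;
        }

    This at least guarantees the application itself never holds on to a faulted channel, whatever VS2008 may be caching internally.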

    Read the article

  • Snow Leopard & Ruby on Rails - SQLite3 issue

    - by spin-docta
    I just upgraded to Snow Leopard. Before, I had everything running fine, but now when I start the server from the terminal I get:

        => Booting WEBrick
        => Rails 2.3.3 application starting on http://0.0.0.0:3000
        => Call with -d to detach
        => Ctrl-C to shutdown server
        [2009-08-28 23:18:19] INFO  WEBrick 1.3.1
        [2009-08-28 23:18:19] INFO  ruby 1.8.7 (2008-08-11) [universal-darwin10.0]
        [2009-08-28 23:18:19] INFO  WEBrick::HTTPServer#start: pid=845 port=3000

    Then when I go to a generated page, it seems like it isn't working with SQLite3. How do I fix this? Here's what the server prints out when I go to a scripted view page:

        /!\ FAILSAFE /!\  Fri Aug 28 23:18:34 -0400 2009
        Status: 500 Internal Server Error
        uninitialized constant SQLite3::Driver::Native::Driver::API
        /Library/Ruby/Gems/1.8/gems/activesupport-2.3.3/lib/active_support/dependencies.rb:105:in `const_missing'
        /Library/Ruby/Gems/1.8/gems/sqlite3-ruby-1.2.5/lib/sqlite3/driver/native/driver.rb:76:in `open'
        /Library/Ruby/Gems/1.8/gems/sqlite3-ruby-1.2.5/lib/sqlite3/database.rb:76:in `initialize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/query_cache.rb:9:in `cache'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/query_cache.rb:28:in `call'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/head.rb:9:in `call'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/methodoverride.rb:24:in `call'
        /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/params_parser.rb:15:in `call'
        /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/session/cookie_store.rb:93:in `call'
        /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/reloader.rb:29:in `call'
        /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/failsafe.rb:26:in `call'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/lock.rb:11:in `call'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/lock.rb:11:in `synchronize'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/lock.rb:11:in `call'
        /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/dispatcher.rb:106:in `call'
        /Library/Ruby/Gems/1.8/gems/rails-2.3.3/lib/rails/rack/static.rb:31:in `call'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/urlmap.rb:46:in `call'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/urlmap.rb:40:in `each'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/urlmap.rb:40:in `call'
        /Library/Ruby/Gems/1.8/gems/rails-2.3.3/lib/rails/rack/log_tailer.rb:17:in `call'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/content_length.rb:13:in `call'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/handler/webrick.rb:46:in `service'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/httpserver.rb:104:in `service'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/httpserver.rb:65:in `run'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:162:in `start'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:95:in `start'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:92:in `each'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:92:in `start'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:23:in `start'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:82:in `start'
        /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/handler/webrick.rb:13:in `run'
        /Library/Ruby/Gems/1.8/gems/rails-2.3.3/lib/commands/server.rb:111
        /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
        script/server:3
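
    The usual explanation for this error after a Snow Leopard upgrade is that the sqlite3-ruby gem's native extension was compiled against the old 32-bit Ruby and needs rebuilding for the new 64-bit runtime. A commonly suggested rebuild, sketched below (the ARCHFLAGS value is the standard Snow Leopard suggestion, not something taken from the question):

        sudo gem uninstall sqlite3-ruby
        sudo env ARCHFLAGS="-arch x86_64" gem install sqlite3-ruby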

    Read the article

  • Using IIS Application Request Routing (ARR) for ASP.NET MVC

    - by Malcolm Frexner
    I use a simple ASP.NET MVC web (the template you get when you create a new site) and the web works as expected in my live environment. I am now trying to use IIS Application Request Routing version 2. I have a rule that sends all requests that match it to a different server. The settings are much like this: http://blogs.iis.net/wonyoo/archive/2008/07/09/application-request-routing-arr-as-a-reverse-proxy.aspx. My rule is just a bit different: it is /shop(.*), so only requests that contain "shop" are sent to a different server. I have to use rewrite, not redirect (the same as in the picture). This works as long as the web the original requests go to is not an ASP.NET MVC web. I tried a plain .htm file in the web folder and it worked. If I put a compiled ASP.NET application into the web folder, it worked. But as soon as I put an ASP.NET MVC web into the folder, requests are served by this application. My understanding is that ARR should kick in before the web application gets a chance to handle the request. Did anybody use ARR successfully as a reverse proxy for an ASP.NET MVC web? EDIT: Here is the resulting web.config when the rewrite rule is entered. With this rule I get a 404, which indicates that the rule is not used.

    <?xml version="1.0" encoding="UTF-8"?> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" /> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <appSettings /> <connectionStrings> <add name="ApplicationServices" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient" /> </connectionStrings> <system.web> <!-- Set compilation debug="true" to insert debugging symbols into
the compiled page. Because this affects performance, set this value to true only during development. --> <compilation debug="false"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Data.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> </assemblies> </compilation> <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. --> <authentication mode="Forms"> <forms loginUrl="~/Account/LogOn" timeout="2880" /> </authentication> <membership> <providers> <clear /> <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ApplicationServices" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" passwordStrengthRegularExpression="" applicationName="/" /> </providers> </membership> <profile> <providers> <clear /> <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ApplicationServices" applicationName="/" /> </providers> </profile> <roleManager enabled="false"> <providers> <clear /> <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </providers> </roleManager> <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. 
<customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> --> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </controls> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> <add namespace="System.Linq" /> <add namespace="System.Collections.Generic" /> </namespaces> </pages> <httpHandlers> <remove verb="*" path="*.asmx" /> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" /> <add verb="*" path="*.mvc" validate="false" type="System.Web.Mvc.MvcHttpHandler, System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </httpModules> </system.web> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5" /> <providerOption name="WarnAsError" value="false" /> </compiler> <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" warningLevel="4" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5" /> <providerOption name="OptionInfer" value="true" /> <providerOption name="WarnAsError" value="false" /> </compiler> </compilers> </system.codedom> <system.web.extensions /> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <rewrite> <rules> <rule name="shop" stopProcessing="true"> <match url="^shop/([_0-9a-z-.]+)" /> <action type="Rewrite" url="article.aspx?title={R:1}" logRewrittenUrl="true" /> </rule> </rules> </rewrite> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true"> <remove name="ScriptModule" /> <remove name="UrlRoutingModule" /> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated" /> <remove name="ScriptHandlerFactory" /> <remove name="ScriptHandlerFactoryAppServices" /> <remove name="ScriptResource" /> <remove name="MvcHttpHandler" /> <remove name="UrlRoutingHandler" /> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="MvcHttpHandler" preCondition="integratedMode" verb="*" path="*.mvc" type="System.Web.Mvc.MvcHttpHandler, System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="UrlRoutingHandler" preCondition="integratedMode" verb="*" path="UrlRouting.axd" type="System.Web.HttpForbiddenHandler, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </handlers> </system.webServer> </configuration>
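
    One plausible culprit, given runAllManagedModulesForAllRequests="true" in the config above: ASP.NET's UrlRoutingModule now sees every request, including /shop/..., and MVC's default route claims it before the rewrite produces a response. A hedged sketch of telling routing to leave the path alone in Global.asax.cs — the route pattern is an assumption derived from the /shop(.*) rule in the question, not part of the original project:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            // Let the IIS-level ARR rewrite rule handle anything under /shop
            // instead of the MVC routing module.
            routes.IgnoreRoute("shop/{*pathInfo}");

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = "" });
        }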

    Read the article

  • NHibernate.Bytecode.UnableToLoadProxyFactoryFactoryException

    - by Shane
    I have the following code set up in my startup:

        IDictionary<string, string> properties = new Dictionary<string, string>();
        properties.Add("connection.driver_class", "NHibernate.Driver.SqlClientDriver");
        properties.Add("dialect", "NHibernate.Dialect.MsSql2005Dialect");
        properties.Add("proxyfactory.factory_class", "NNHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle");
        properties.Add("connection.provider", "NHibernate.Connection.DriverConnectionProvider");
        properties.Add("connection.connection_string", "Data Source=ZEUS;Initial Catalog=mydb;Persist Security Info=True;User ID=sa;Password=xxxxxxxx");
        InPlaceConfigurationSource source = new InPlaceConfigurationSource();
        source.Add(typeof(ActiveRecordBase), (IDictionary<string, string>) properties);
        Assembly asm = Assembly.Load("Repository");
        Castle.ActiveRecord.ActiveRecordStarter.Initialize(asm, source);

    I am getting the following error:

        failed: NHibernate.Bytecode.UnableToLoadProxyFactoryFactoryException : Unable to load type 'NNHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle' during configuration of proxy factory class. Possible causes are:
        - The NHibernate.Bytecode provider assembly was not deployed.
        - The typeName used to initialize the 'proxyfactory.factory_class' property of the session-factory section is not well formed.

    I have read and read. I am referencing all the assemblies listed, and I am at a total loss as to what to try next:

        Castle.ActiveRecord.dll
        Castle.DynamicProxy2.dll
        Iesi.Collections.dll
        log4net.dll
        NHibernate.dll
        NHibernate.ByteCode.Castle.dll

    I am 100% sure the assembly is in the bin. Anyone have any ideas?
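
    Worth pointing out: the type name in the code above reads 'NNHibernate.ByteCode.Castle.ProxyFactoryFactory' — with a doubled leading N — and the exception quotes exactly that string, so the misspelled typeName alone would explain the "not well formed" failure regardless of which assemblies are deployed. The corrected property would be:

        properties.Add("proxyfactory.factory_class",
            "NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle");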

    Read the article

  • Automapper - cannot resolve the Generic List

    - by chugh97
    Here is my mapping configuration:

        Mapper.CreateMap<BusinessObject, Proxy.DataContacts.DCObject>()
            .ForMember(x => x.ExtensionData, y => y.Ignore())
            .ForMember(z => z.ValidPlaces, a => a.ResolveUsing(typeof(ValidPlaces)));
        Mapper.AssertConfigurationIsValid();

        public class BusinessObject
        {
            public Enum1 Enum1 { get; set; }
            public List<ValidPlaces> ValidPlaces { get; set; }
        }

        public class ValidPlaces
        {
            public int No { get; set; }
            public string Name { get; set; }
        }

        public class DCObject
        {
            [DataMember]
            public Enum1 Enum1 { get; set; }
            [DataMember]
            public List<ValidPlaces> ValidPlaces { get; set; }
        }

    Mapper.CreateMap works fine when Mapper.AssertConfigurationIsValid() is called, but when I actually call into the service, AutoMapper throws an exception saying ValidPlaces could not be mapped. It works fine if I use Ignore(), but ideally I want that member mapped. Any AutoMapper experts out there, please help.
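
    One observation: ResolveUsing(typeof(ValidPlaces)) asks AutoMapper to instantiate ValidPlaces as a value resolver, which it is not — and since AssertConfigurationIsValid() only checks that every destination member is accounted for, the problem would only surface at mapping time, matching the symptoms described. Because both types expose List<ValidPlaces> with the same element type, a hedged sketch of simply letting AutoMapper handle the collection would be:

        Mapper.CreateMap<BusinessObject, Proxy.DataContacts.DCObject>()
            .ForMember(d => d.ExtensionData, opt => opt.Ignore());
        // Identical List<ValidPlaces> members are copied element by element
        // without any explicit ForMember clause.

    If the element types ever diverge, a separate Mapper.CreateMap for the element pair is the usual companion step.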

    Read the article

  • Autofac WCF integration + sessions

    - by Michael Sagalovich
    I have an ASP.NET MVC 3 application that collaborates with a WCF service, which is hosted using the Autofac host factory. Here are some code samples:

    .svc file:

        <%@ ServiceHost Language="C#" Debug="true" Service="MyNamespace.IMyContract, MyAssembly" Factory="Autofac.Integration.Wcf.AutofacServiceHostFactory, Autofac.Integration.Wcf" %>

    Global.asax of the WCF service project:

        protected void Application_Start(object sender, EventArgs e)
        {
            ContainerBuilder builder = new ContainerBuilder();
            // Here I perform all registrations, including the implementation of IMyContract
            AutofacServiceHostFactory.Container = builder.Build();
        }

    Client proxy class constructor (MVC side):

        ContainerBuilder builder = new ContainerBuilder();
        builder.Register(c => new ChannelFactory<IMyContract>(
                new BasicHttpBinding(),
                new EndpointAddress(Settings.Default.Url_MyService)))
            .SingleInstance();
        builder.Register(c => c.Resolve<ChannelFactory<IMyContract>>().CreateChannel())
            .UseWcfSafeRelease();
        _container = builder.Build();

    This works fine until I want the WCF service to allow or require sessions ([ServiceContract(SessionMode = SessionMode.Allowed)] or [ServiceContract(SessionMode = SessionMode.Required)]) and to share one session with the MVC side. I changed the binding to WSHttpBinding on the MVC side, but I get different exceptions depending on how I tune it. I also tried changing AutofacServiceHostFactory to AutofacWebServiceHostFactory, with no result. I am not using a config file, as I am mainly experimenting, not developing a real-life application, but I need to study the case. If you think I can achieve what I need only with config files, then OK, I'll use them. I will provide exception details for each combination of settings if required; I'm omitting them so as not to make the post too large. Any ideas on what I can do?
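
    One detail that bites here: SessionMode.Required needs a session-capable binding on both sides, configured identically — a BasicHttpBinding channel can never join a session, and a WSHttpBinding whose security or reliable-session settings differ from the service's will fail in assorted ways. A minimal sketch of the client-side registration, assuming the service is exposed over a plain wsHttpBinding with message security (an assumption, since the service binding isn't shown in the question):

        builder.Register(c => new ChannelFactory<IMyContract>(
                new WSHttpBinding(SecurityMode.Message),  // must match the service binding exactly
                new EndpointAddress(Settings.Default.Url_MyService)))
            .SingleInstance();

    Message security on WSHttpBinding establishes a security-context session, which satisfies SessionMode.Required as long as the same channel (not a fresh one per call) is reused for the conversation.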

    Read the article

  • jQuery.get() not returning any data

    - by David Waters
    Hi, I am trying to scrape other people's web pages (for the forces of good, not evil). I am currently trying to do this with JavaScript/jQuery from within a browser. I am finding that no data is returned from the jQuery.get() success callback function. My code:

        $.get('http://www.google.co.uk/', function (data, textStatus, XMLHttpRequest) {
            alert("status " + textStatus);
            alert('data:' + data);
            window.data = data;
            window.textStatus = textStatus;
            window.httpReq = XMLHttpRequest;
        });

    In my mind this should simply do a GET on Google, store the data in window.data, and we are all good. What happens is we get textStatus == success and data == "". The status on the XMLHttpRequest is 4 (success). I have looked at the network traffic using a transparent proxy (Charles) and everything looks fine there: HTTP status 200, plenty of data being returned. I am running this just from the Firebug console in Firefox. Any ideas?
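
    The empty data alongside a 200 on the wire is the browser's same-origin policy at work: the cross-domain request goes out, but the response is withheld from script. The classic workaround is to fetch the page through a small same-origin proxy on your own server and point $.get at that instead. A hedged sketch as an ASP.NET MVC action — the controller, action, and parameter names are invented for the example:

        using System.Net;
        using System.Web.Mvc;

        public class ScrapeController : Controller
        {
            // GET /Scrape/Fetch?url=http://www.google.co.uk/
            public ContentResult Fetch(string url)
            {
                using (var client = new WebClient())
                {
                    // Download server-side, where no same-origin policy applies,
                    // and hand the markup back to the browser from our own origin.
                    string html = client.DownloadString(url);
                    return Content(html, "text/html");
                }
            }
        }

    The page's JavaScript then calls $.get('/Scrape/Fetch?url=...') and receives the data normally.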

    Read the article

< Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >