Search Results


  • Why are 32-bit application pools more efficient in IIS? [closed]

    - by mhenry1384
    I've been running load tests with two different ASP.NET web applications in IIS. The tests are run with 5, 10, 25, and 250 user agents, on a box with 8 GB RAM running Windows 7 Ultimate x64; the same box runs both IIS and the load test project. I did many runs, and the data is very consistent. For every load, I see a lower "Avg. Page Time (sec)" and a lower "Avg. Response Time (sec)" if I have "Enable 32-bit Applications" set to True in the Application Pools. The difference gets more pronounced the higher the load. At very high loads, the web applications start to throw errors (503) if the application pools are 64-bit, but they can keep up if set to 32-bit. Why are 32-bit app pools so much more efficient? Why isn't the default for application pools 32-bit?

    Read the article

  • TFS Build Server not finding 'Microsoft.Expression.Interactions'

    - by Chris Skardon
    We’ve been trying to get the build server to pick up the Microsoft.Expression.Interactions.dll needed so we can use things like the ExtendedVisualStateManager and the DataStateBehavior. Adding the DataStateBehavior in Blend adds the reference to the project, so it all compiles fine on the local machine. Checking the code into a CI server throws up some ugliness:

        d:\Builds\6\Source\MyFile.xaml (290): The tag 'DataStateBehavior' does not exist in XML namespace 'http://schemas.microsoft.com/expression/2010/interactions'.

    Errr, it should do…?? The reference is there, so… ahhhhh! A quick check of the properties and we’re using the dll from the c:\program files (x86)\ location, no wonder the build server can’t find it. Let’s add the dll into our (ever expanding) ‘lib’ folder, reference that version, and check that bad boy in… No. Still no. Still the same error. What the??? The reference is still pointing to the program files location?? Ok, so let’s modify the csproj file using the wonderful Notepad, changing the reference from

        <Reference Include="Microsoft.Expression.Interactions, /* loadsa shizzle here */ />

    to

        <Reference Include="Microsoft.Expression.Interactions">
          <HintPath>..\Lib\Microsoft.Expression.Interactions.dll</HintPath>
        </Reference>

    Check that in, aaaand… Success!!! The reason for this (from what I can gather from here) is that we’re using the Productivity Power Tools, which try to be clever about referencing dlls, and changed what we’d asked for (i.e. the local version) back to the original program files location. Editing the file in Notepad (sweet, sweet Notepad) gets around the issue… Irritating, and it took a while to figure out… Meh :)

    Read the article

  • Unable to authenticate Windows XP clients against Snow Leopard Server PDC after 10.6.2 upgrade

    - by Roland
    I set up a Snow Leopard Server 10.6.1 as a PDC and had no problems authenticating Windows XP clients: joining a Windows XP client to the SLS PDC domain and logging in from a Windows XP client both worked. After the update to Snow Leopard Server 10.6.2 the authentication is broken:

        opendirectory_smb_pwd_check_ntlmv1 gave -14090 [eDSAuthFailed]

    By changing the Windows XP "Network security: LAN Manager authentication level" policy to NTLMv2 responses only, authentication against an SMB share is possible, but trying to join the SLS PDC domain still fails:

        opendirectory_smb_pwd_check_ntlmv2 gave -14090 [eDSAuthFailed]

    Any ideas? Is anyone else having similar authentication difficulties?

    Read the article

  • Switching GlassFish 2.x to 3.1.1

    - by Alexandre Abreu
    I am managing an old Java project that uses GlassFish 2.x. It seems NetBeans does not support 2.x versions any more, so I want to change it to 3.1.1. How do I properly make that change? I have JDK 1.6 installed. When I try to select 3.1.1, it does not fix the error "Unable to find Application Server J2EE". Thanks in advance. Sorry if this question is in the wrong place, this is really not my area.

    Read the article

  • Setting up a LAN within a LAN

    - by nageeb
    How unreasonable would it be to set up a small LAN within an existing LAN? I'm setting up a series of video surveillance servers and a number of IP cameras in a client's location and cannot have my equipment on the same network as their local machines. My network is essentially self-contained, and the only device that anyone needs to access is a web app on one of the machines. Basically I'm thinking of installing a SOHO router which would uplink to their LAN, and then setting up some NAT rules on both their router and my router to allow outside access to the webserver, as sketched below. Is there anything fundamental that I'm missing which would prevent this from working?
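
    This double-NAT arrangement generally works; the sketch below shows the forwarding chain with hypothetical addresses and ports (none of these values are from the original post):

        # Client LAN: 192.168.1.0/24, their router at 192.168.1.1
        # Camera LAN: 192.168.50.0/24, behind a SOHO router whose WAN side is 192.168.1.50
        Their router:  forward TCP 8080 -> 192.168.1.50:80   (reaches the inner router)
        SOHO router:   forward TCP 80   -> 192.168.50.10:80  (the web-app server)

    The one common gotcha is overlapping subnets: pick an inner subnet that differs from the client's so the inner router can route between the two.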

    Read the article

  • Thunderbird compact is taking forever

    - by mulllhausen
    One day I came in to work and found that our development server, an Ubuntu box, had a full hard disk. I did a bit of investigation using the du command, and it seems like Mozilla Thunderbird is the major culprit. After burning off some backups, the disk was left at 94%:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             895G  791G   59G  94% /
        none                  4.0G  300K  4.0G   1% /dev
        none                  4.0G  1.4M  4.0G   1% /dev/shm
        none                  4.0G  140K  4.0G   1% /var/run
        none                  4.0G     0  4.0G   0% /var/lock
        none                  4.0G     0  4.0G   0% /lib/init/rw

        $ cd
        $ du -ch | grep [0-9]G
        666G    ./.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au
        666G    ./.thunderbird/ccsmcruu.default/ImapMail
        667G    ./.thunderbird/ccsmcruu.default
        667G    ./.thunderbird
        2.2G    ./.VirtualBox/Machines/iBike/Snapshots
        2.2G    ./.VirtualBox/Machines/iBike
        2.2G    ./.VirtualBox/Machines
        2.2G    ./.VirtualBox
        670G    .
        670G    total

    I did some reading and found that Mozilla Thunderbird does not compact folders by default, i.e. all of the old emails that were sent to trash are still kept. One of the mailboxes used to get a lot of spam, so I guess this accounts for the 667 GB. I opened up Thunderbird to see how much space the inbox actually takes up, and it turns out to be approximately 500 MB, over 1000 times less than the stuff that has not been compacted over the years. So I right-clicked on the inbox directory in the tree on the left of Thunderbird and selected 'Compact'. I left it for about 12 hours, but even after that it still said 'compacting folder' on the status bar. I don't use Thunderbird on this PC (it belonged to a colleague who has left the company), but I do occasionally need to look through the inbox for references to the project I am working on, so deleting all traces of Thunderbird is not an option. My question is: is there any way I can monitor the progress of Thunderbird's compacting function? I would really like to know how long it is going to take. Also, is there any way I can speed up the compacting process?
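
    One hedged way to watch progress from outside Thunderbird: compacting rewrites each mbox file to a temporary file (typically named nstmp) in the same directory, so watching that file grow gives a rough progress bar against the original folder size. This assumes the nstmp behaviour of Thunderbird's mbox store and reuses the profile path from the du output above:

        $ watch -n 60 'ls -lh ~/.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au'

    If progress is glacial, it is usually disk-bound: rewriting a 666 GB mbox on a 94%-full disk leaves very little scratch space, so freeing space first (or temporarily moving the profile to another disk) tends to speed it up.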

    Read the article

  • DD-WRT Router acting as a Switch and Wi-Fi AP

    - by dotnetchris
    Recently I upgraded to Comcast Business, and their modem has a built-in router, so I took my DD-WRT router off gateway duty and moved it to a location in my home where I needed more ports and Wi-Fi (preferably without running new Ethernet cables, since it's a 50-100 foot run through floors and ceilings). I have the network cable from my Comcast modem going into Port 1 of my DD-WRT router, with Ports 2 and 3 being networked PCs. I have DD-WRT set up as a DHCP forwarder with the firewall disabled. This lets all of my devices access the network fine. Is this the correct way to do this? Or is this a less optimal solution that should be done in a slightly different way?
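
    For comparison, the textbook DD-WRT recipe for "switch + access point" duty differs slightly from the DHCP-forwarder approach. A sketch (menu names vary by DD-WRT build; the addresses are hypothetical):

        Setup -> Basic Setup:      WAN Connection Type = Disabled
                                   Local IP = 192.168.1.2   (unused address in the modem/router's subnet)
                                   Gateway  = 192.168.1.1   (the Comcast gateway)
                                   DHCP Server = Disable
        Setup -> Advanced Routing: Operating Mode = Router   (not Gateway)
        Uplink cable into a LAN port (or keep the WAN port with "Assign WAN Port to Switch" enabled)

    Either approach works; this one keeps everything in a single broadcast domain without depending on the firewall being disabled.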

    Read the article

  • Heroku SSL "certificate is only valid for the following names: *.herokuapp.com, herokuapp.com"

    - by benedict_w
    I'm trying to set up a GeoTrust SSL certificate for my Heroku app using the SSL Endpoint add-on and the instructions at https://devcenter.heroku.com/articles/ssl-endpoint. I stripped the passphrase from my private key using:

        openssl rsa -in server.orig.key -out server.key

    and added the certificate to Heroku:

        heroku certs:add server.crt server.key

    Everything seemed to be fine; heroku certs listed the correct information, only with Trusted = false for my certificate. If I go to https://tokyo-2121.herokussl.com the browser says:

        You attempted to reach tokyo-2121.herokussl.com, but instead you actually reached a server identifying itself as www.mydomain.com.

    which is expected, with the certificate apparently identifying the correct domain. But when I set up the CNAME to the given tokyo-2121.herokussl.com and visit my subdomain, the browser says:

        www.mydomain.com uses an invalid security certificate. The certificate is only valid for the following names: *.herokuapp.com, herokuapp.com

    If I run curl -kv https://www.mydomain.com I get:

        subjectAltName does not match www.mydomain.com
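
    A hedged diagnostic for this symptom: the *.herokuapp.com certificate is what Heroku serves when a TLS request arrives somewhere other than your SSL endpoint (for example, a CNAME that still resolves to the app's herokuapp.com hostname, or a stale/cached DNS record). Comparing what the name resolves to, and what certificate each endpoint presents, usually pinpoints it:

        host www.mydomain.com    # should show the tokyo-2121.herokussl.com CNAME chain
        openssl s_client -connect www.mydomain.com:443 -servername www.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject
        openssl s_client -connect tokyo-2121.herokussl.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject

    If the first x509 subject shows *.herokuapp.com while the second shows your domain, DNS (or a cached record) is still pointing at the app hostname rather than the SSL endpoint.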

    Read the article

  • Find which config file Apache uses to find virtual host

    - by eje211
    I have a rather complicated Apache setup: some config files have to be manually written and some are automatically generated. So far, I've had no problems. But now I've just tried to set up a site at newsubdomain.mydomain.com, and Apache directs it to the data for www.someotherdomain.com. I've looked into all the many config files and can't find anything wrong. Everything points where it should, and no definitions, as far as I can tell, overlap. So what I'm wondering is: is there a way to see which config files or config instructions Apache is using to match a URL to a directory? Thanks.
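
    A useful starting point: Apache can dump its parsed virtual-host table, including the file and line number where each vhost is defined, which shows exactly which definition wins for a given name:

        apachectl -S    # on Debian/Ubuntu: apache2ctl -S

    One detail worth checking in that output: a request whose Host header matches no ServerName/ServerAlias is served by the first vhost listed for that address and port, which is the classic cause of "new subdomain lands on some other site" (typically the new name is not yet in any ServerName/ServerAlias, often because a generated config was never reloaded).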

    Read the article

  • Setting write permissions for folders while creating a package with MSDeploy

    - by bala_88
    I'm using MSDeploy to create an artefact as a build step in NAnt. This particular build step is called on successful compilation, and the artefact is then used for deployment. Here is the step specified in my build file:

        <target name="BuildMsDeployPackage" depends="StageForMsDeployPackaging">
          <exec program="${msdeploy.exe}"
                workingdir="${buildDirectory}"
                verbose="true"
                commandline="-verb:sync -source:iisapp=${packagingDirectory} -dest:package=${publishDirectory}\${webapp.artifact.zip}"/>
        </target>

    The source here is my web project. I want to be able to set write access for a couple of folders in the package that is created. Is this possible? I know that there is a setAcl provider for this specific purpose, but can it be used while creating a package?
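
    It can, with the caveat that a package built from a single iisApp source carries only that one provider; the usual workaround is to package from a manifest that lists the iisApp plus one setAcl entry per folder. A hedged sketch (the paths and manifest filename are placeholders, not from the original build file):

        <!-- package.manifest.xml -->
        <sitemanifest>
          <iisApp path="C:\staging\MyWebApp" />
          <setAcl path="C:\staging\MyWebApp\App_Data" setAclAccess="Write" />
          <setAcl path="C:\staging\MyWebApp\Logs"     setAclAccess="Write" />
        </sitemanifest>

    packaged with:

        msdeploy.exe -verb:sync -source:manifest=package.manifest.xml -dest:package=MyWebApp.zip

    The setAcl entries are then replayed when the package is synced to the destination IIS server.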

    Read the article

  • How can I connect any USB device to my computer wirelessly?

    - by daviesgeek
    I am trying to connect a Canon 550D wirelessly to my computer so that I can control it from a distance. It can connect normally via a USB cable, so I was wondering if there was a way to plug a dongle into the computer and a dongle into the camera so the two can connect. I'm basically looking for a really long USB cord that will work over long distances (200+ feet). Another important factor is that I would really prefer not having to use a power cord, rather just plugging in a dongle, as that will make the setup more mobile. Does anything like this exist? (I know that Cables Unlimited had something like what I wanted, but they went out of business last year.)

    Read the article

  • SFTP jail & Keeping file ownership the same / File owner per folder

    - by Dragonshadow
    I want to set up a jailed SFTP account for a subfolder of another user's home folder, but I want the owner of everything in that subfolder to stay the same, including new files and folders uploaded and created by the SFTP user, while still allowing access to the files and folders of that subfolder as if the SFTP user were the parent user. Given users rawny and bawb-sftp:

        /home/rawny       <- rawny owns this
        /home/rawny/sftp  <- rawny owns this too, but bawb-sftp can upload to it, edit files, etc.

    If bawb-sftp uploads a file /home/rawny/sftp/lol.txt, rawny should still own the file, as if he made it in the first place, even though bawb-sftp was the one that uploaded it. Basically I guess I'm asking for an SFTP jail that acts as a highly limited passthrough/puppet for another user?
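
    For what it's worth, Unix won't let a non-root user create files owned by someone else, so the literal ask needs a trick like bindfs or a chown cron job; the closest native approximation is shared group ownership with a setgid directory. A sketch, assuming a shared group named rawny:

        usermod -aG rawny bawb-sftp    # let the sftp user into rawny's group
        chgrp -R rawny /home/rawny/sftp
        chmod 2770 /home/rawny/sftp    # setgid bit: new files inherit the group

    New uploads then belong to group rawny (with bawb-sftp as the user owner), so rawny keeps full access. For true user-ownership rewriting, bindfs can present the tree with a forced owner (e.g. bindfs --force-user=rawny ...), at the cost of an extra FUSE mount.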

    Read the article

  • links for 2010-04-09

    - by Bob Rhubart
    Brian Dayton: My Doors - Why Standards Matter to Business
    "My 1951 house wasn't built with me in mind. They built what worked and called it a day. The same holds true with a lot of business applications. They were designed and architected for one-time use with one use-case in mind. Today's business climate is different." -- Brian Dayton (tags: oracle otn architecture businessalignment standards)

    Edwin Biemond: ADF Task Flow interaction with WebCenter Composer
    Oracle ACE Edwin Biemond of Whitehorses describes how to manage independent task flows at runtime with Oracle WebCenter Composer. (tags: otn oracle oracleace webcenter enterprise2.0)

    John Mead: Exadata in Retail Presentation
    Rittman Mead's John Mead shares slides describing a recent project: a custom data warehouse built on Exadata, populated by CDC with reporting delivered by OBIEE. (tags: oracle otn rittmanmead datawarehousing exadata obiee cdc)

    Where's The Line Between Architecting And Engineering? | Forrester Blogs
    Forrester's Gene Leganza answers the question "What is the difference between architecting and designing or, alternately, between architecture and engineering?" (tags: architecture engineering forrester)

    Read the article

  • Unix LVM: how to resize root lvm

    - by Hussein Sabbagh
    I took over a virtual server at work after a co-worker left. He, however, set up the server incorrectly at multiple stages, and I'm cleaning the mistakes up as I run into them. Currently I've realized that the file system is split across two logical volumes, both at 50 GB: one is mounted as the root directory and the other as the /home directory. Sadly, the server has used up 46 GB of the root LV and I need to expand it. I have already shrunk and remounted the home LV and resized the root LV, but I can't figure out how to unmount the root directory while the computer is on. Obviously this needs to be done before I can finalize the expansion, but I don't know how. I'd appreciate any help or pointing in the right direction. Thanks in advance. PS: this is on a CentOS server.
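
    Good news on this one: if the root filesystem is ext3/ext4 (the CentOS default), growing does not require an unmount at all; only shrinking does. A hedged sketch with hypothetical VG/LV names (check yours with lvdisplay):

        lvextend -L +20G /dev/VolGroup00/LogVol00    # skip if the LV is already resized
        resize2fs /dev/VolGroup00/LogVol00           # grows ext3/ext4 online, root stays mounted

    If the filesystem ever had to be shrunk or unmounted instead, that would have to happen from a rescue environment (boot the CentOS install media with "linux rescue"), since root can never be unmounted while in use.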

    Read the article

  • Configuring RAID1 on HP Proliant Microserver N54L / Ubuntu 14.04.1 LTS [duplicate]

    - by Chris Beach
    I've bought an N54L and fitted two 3 TB drives, keen to get set up with RAID1. In the BIOS, the SATA controller mode is set to "RAID", and in the RAID Option ROM utility both physical drives are set up as one logical drive. When I came to install Ubuntu (14.04.1), both drives appeared during the setup process. I was only expecting to see the logical drive, although I'm a complete novice with RAID. I've read that the HP ProLiant MicroServers don't have "proper" RAID support and require some kind of driver to be installed. I've tried a few HP utilities from the following apt repo:

        deb http://downloads.linux.hp.com/SDR/repo/mcp wheezy/current non-free

    On installation, most say "server not supported". Would appreciate your advice.
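
    A hedged note on the underlying issue: the N54L's onboard RAID is host/firmware ("fake") RAID, so Linux sees the raw disks unless a dmraid driver assembles the set. The common recommendation for this box is to switch the controller back to AHCI in the BIOS and use Linux software RAID instead. The Ubuntu server installer offers this under manual partitioning ("Configure software RAID"); done by hand it would look something like (device names assumed):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        cat /proc/mdstat    # watch the initial mirror sync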

    Read the article

  • GnuPG PHP gnupg Folder & Files Permission

    - by Michael Robinson
    Situation: we plan on using PHP's GnuPG extension to encrypt/decrypt files. Currently we've set up some test cases using keys generated with GPG. The generated files reside in /Users/username/.gnupg/. I am able to get keyinfo for the key I want to use to encrypt/decrypt, but when I attempt to use addencryptkey, I get:

        (E_WARNING: 2): gnupg::addencryptkey() [gnupg.addencryptkey]: get_key failed

    I think this is due to the permissions on the ~/.gnupg folder and the enclosed files. The files are owned by me (username), but Apache runs as www. A few days ago I did have this working, but it seems each time I use GPG Keychain Access to import or export a key, the folder's permissions are changed. Question: what are the exact permissions required to allow PHP's GnuPG to add encrypt and decrypt keys?
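
    Strictly, GnuPG wants the keyring directory and files to be owned by (and readable only by) the user running it, here www; rather than fighting GPG Keychain Access over ~/.gnupg, a common workaround is to give the web user its own copy of the keyring and point the extension at it. A hedged sketch (the paths are placeholders):

        sudo mkdir -p /usr/local/www-gnupg
        sudo cp /Users/username/.gnupg/pubring.gpg /Users/username/.gnupg/secring.gpg /usr/local/www-gnupg/
        sudo chown -R www /usr/local/www-gnupg
        sudo chmod 700 /usr/local/www-gnupg
        sudo chmod 600 /usr/local/www-gnupg/*

    and in PHP, before constructing the gnupg object:

        putenv('GNUPGHOME=/usr/local/www-gnupg');

    This keeps the web keyring stable no matter what the desktop tools do to the original folder.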

    Read the article

  • Summary of Oracle E-Business Suite Technology Webcasts and Training

    - by BillSawyer
    Last Updated: November 16, 2011

    We're glad to hear that you've been finding our ATG Live Webcast series to be useful. If you missed a webcast, you can download the presentation materials and listen to the recordings below. We're collecting other learning-related materials right now, and we'll update this summary with pointers to new training resources on an ongoing basis.

    ATG Live Webcast Replays

    All of the ATG Live Webcasts are hosted by the Oracle University Knowledge Center. In order to access the replays, you will need a free Oracle.com account; you can register for an Oracle.com account here. If you are a first-time OUKC user, you will have to accept the Terms of Use. Sign in with your Oracle.com account, or if you don't already have one, use the link provided on the sign-in screen to create an account. After signing in, accept the Terms of Use; upon completion of these steps, you will be directed to the replay. You only need to accept the Terms of Use once. Your acceptance will be noted on your account for all future OUKC replays and event registrations.

    1. E-Business Suite R12 Oracle Application Framework (OAF) Rich User Interface Enhancements (Presentation)
    Prabodh Ambale (Senior Manager, ATG Development) and Gustavo Jiminez (Development Manager, ATG Development) offer a comprehensive review of the latest user interface enhancements and updates to OA Framework in EBS 12. The webcast provides a detailed look at new features designed to enhance usability, including new capabilities for personalization and extensions, and features that support the use of dashboards and web services. (January 2011)

    2. E-Business Suite R12 Service Oriented Architectures (SOA) Using the E-Business Suite Adapter (Presentation, Viewlet)
    Neeraj Chauhan (Product Manager, ATG Development) reviews the Service Oriented Architecture (SOA) capabilities within E-Business Suite 12, focussing on using the E-Business Suite Adapter to integrate EBS with third-party applications via web services, and orchestrating services and distributed transactions across disparate applications. (February 2011)

    3. Deploying Oracle VM Templates for Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications
    Ivo Dujmovic (Director, ATG Development) reviews the latest capabilities for using Oracle VM to deploy virtualized EBS database and application tier instances using prebuilt EBS templates, wire those virtualized instances together using the EBS virtualization kit, and take advantage of live migration of user sessions between failing application tier nodes. (February 2011)

    4. How to Reduce Total Cost of Ownership (TCO) Using Oracle E-Business Suite Management Packs (Presentation)
    Angelo Rosado (Product Manager, ATG Development) provides an overview of how EBS sysadmins can make their lives easier with the Management Packs for Oracle E-Business Suite Release 12. This session highlights key features in Application Management Pack (AMP) and Application Change Management Pack that can automate or streamline system configurations, monitor EBS performance and uptime, keep multiple EBS environments in sync with patches and configurations, and create patches for your own EBS customizations and apply them with Oracle's own patching tools. (June 2011)

    5. Upgrading E-Business Suite 11i Customizations to R12 (Presentation)
    Sara Woodhull (Principal Product Manager, ATG Development) provides an overview of how E-Business Suite developers can manage and upgrade existing EBS 11i customizations to R12.
    Sara covers methods for comparing customizations between Release 11i and 12, managing common customization types, managing deprecated technologies, and more. (July 2011)

    6. Tuning All Layers of E-Business Suite (Part 1 of 3) (Presentation)
    Lester Gutierrez, Senior Architect, and Deepak Bhatnagar, Senior Manager, from the E-Business Suite Application Performance team, lead Tuning All Layers of E-Business Suite (Part 1 of 3). This webcast provides an overview of how Oracle E-Business Suite system administrators, DBAs, developers, and implementers can improve E-Business Suite performance by following a performance tuning framework. Part 1 focuses on the performance triage approach, tuning applications modules, upgrade performance best practices, and tuning the database tier. This ATG Live Webcast is an expansion of the performance sessions at conferences that are perennial favourites with hardcore Apps DBAs. (August 2011)

    7. Oracle E-Business Suite Directions: Deployment and System Administration (Presentation)
    Max Arderius, Manager Applications Technology Group, and Ivo Dujmovic, Director Applications Technology Group, lead Oracle E-Business Suite Directions: Deployment and System Administration, covering important changes in E-Business Suite R12.2. The changes discussed in this presentation include Oracle E-Business Suite architecture, installation, upgrade, WebLogic Server integration, online patching, and cloning. This webcast provides an overview of how Oracle E-Business Suite system administrators, DBAs, developers, and implementers can prepare themselves for these changes in R12.2 of Oracle E-Business Suite. (October 2011)

    Oracle University Courses

    For a general listing of all Oracle University courses related to E-Business Suite Technology, use the Oracle University E-Business Suite Technology course catalog link.

    1. R12 Oracle Applications System Administrator Fundamentals
    In this course students learn concepts and functions that are critical to the System Administrator role in implementing and managing the Oracle E-Business Suite. Topics covered include configuring security and user management, configuring flexfields, managing concurrent processing, and setting up other essential features such as profile options and printing. In addition, configuration and maintenance of an Oracle E-Business Suite through Oracle Applications Manager is discussed. Students also learn the fundamentals of Oracle Workflow, including its setup. The System Administrator Fundamentals course provides the foundation needed to effectively control security and ensure smooth operations for an E-Business Suite installation. Demonstrations and hands-on practice reinforce the fundamental concepts of configuring an Oracle E-Business Suite, as well as handling day-to-day system administrator tasks.

    2. R12.x Install/Patch/Maintain Oracle E-Business Suite
    This course is applicable for customers who have implemented Oracle E-Business Suite Release 12 or Oracle E-Business Suite 12.1. It explains how to go about installing and maintaining an Oracle E-Business Suite Release 12.x system. Both Standard and Express installation types are covered in detail. Maintenance topics include a detailed examination of the standard tools and utilities, and an in-depth look at patching an Oracle E-Business Suite system.
    After this course, students will be able to make informed decisions about how to install an Oracle E-Business Suite system that meets their specific requirements, and how to maintain the system afterwards. The extensive hands-on practices include performing an installation on a Linux system, navigating the file system to locate key files, running the standard maintenance tools and utilities, applying patches, and carrying out cloning operations.

    3. R12.x Extend Oracle Applications: Building OA Framework Applications
    This class is a hands-on, lab-intensive course that will keep the student busy and active for the duration of the course. While the course covers the fundamentals that support OA Framework-based applications, the course is really an exercise in J2EE programming. Over the duration of the course, the student will create an OA Framework-based application that selects, inserts, updates, and deletes data from an R12 Oracle Applications instance.

    4. R12.x Extend Oracle Applications: Customizing OA Framework Applications
    This course has been significantly changed from the prior version to include additional deployments. The course doesn't teach the specifics of configuration of each product; that is left to the product-specific courses. What the course does cover is the general methods of building, personalizing, and extending OA Framework-based pages within the E-Business Suite. Additionally, the course covers the methods to deploy those types of customizations. The course doesn't include discussion of the Oracle Forms-based pages within the E-Business Suite.

    5. R12.x Extend Oracle Applications: OA Framework Personalizations
    Personalization is the ability within an E-Business Suite instance to make changes to the look and behavior of OA Framework-based pages without programming, and personalizations are likely to survive patches and upgrades, increasing their utility. This course will systematically walk you through the myriad personalization options, starting with simple examples and increasing in complexity from there.

    6. E-Business Suite: BI Publisher 5.6.3 for Developers
    Starting with the basic concepts, architecture, and underlying standards of Oracle XML Publisher, this course will lead a student through a progression of exercises building their expertise. By the end of the course, the student should be able to create Oracle XML Publisher RTF templates and data templates. They should also be able to deploy and maintain a BI Publisher report in an E-Business Suite instance. Students will also be introduced to Oracle BI Publisher Enterprise.

    7. R12.x Implement Oracle Workflow
    This course provides an overview of the architecture and features of Oracle Workflow and the benefits of using Oracle Workflow in an e-business environment. You can learn how to design workflow processes to automate and streamline business processes, and how to define event subscriptions to perform processing triggered by business events. Students also learn how to respond to workflow notifications, how to administer and monitor workflow processes, and what setup steps are required for Oracle Workflow. Demonstrations and hands-on practice reinforce the fundamental concepts.

    8. R12.x Oracle E-Business Suite Essentials for Implementers
    Oracle R12.1 E-Business Essentials for Implementers is a course that provides a functional foundation for any E-Business Suite Fundamentals course.

    Read the article

  • Can't RDP Into Windows Server After Windows Server Establishes VPN Session

    - by Jennifer Baker
    Hi there. I've set up a Windows 2008 Server in a cloud environment and am able to RDP to this Windows server ("CloudServer"). When I establish a PPTP VPN connection from the CloudServer back to our office Windows server ("OfficeServer"), my RDP session is dropped and it won't let me RDP back in. The only way I can RDP to the CloudServer is by using the DHCP IP address issued by the OfficeServer. What do I need to change on the CloudServer? Thanks in advance for your help! Jennifer
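
    A hedged explanation that fits these symptoms: by default a Windows PPTP connection enables "Use default gateway on remote network", so the moment the tunnel comes up, the CloudServer's return traffic (including your RDP session) is routed through the OfficeServer instead of its public interface, which is also why RDP still works via the VPN-issued address. Comparing the routing table before and after the tunnel connects should confirm it:

        route print    # run on the CloudServer; look for a new 0.0.0.0 default route via the PPTP adapter

    If that's the case, unticking "Use default gateway on remote network" (VPN connection -> Properties -> Networking -> TCP/IPv4 -> Advanced) keeps the public default route intact while still reaching the office network through the tunnel (a static route to the office subnet may then be needed).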

    Read the article

  • Enabling 32-Bit Applications on IIS7 (also affects 32-bit oledb or odbc drivers) [Solved]

    - by Humprey Cogay, C|EH
    We just bought a new web server. After installing Windows 2008 R2 (a 64-bit OS, with IIS7), SQL Server Standard 2008 R2, and IBM Client Access for V5R3 with its .NET data providers, I tried deploying our new project, which is fully functional on an IIS6-based web server, and encountered this error:

        The 'IBMDA400.DataSource.1' provider is not registered on the local machine.

    To remove the doubt that I still lacked some software prerequisites or had version conflicts (since I encountered some errors while installing IBM Client Access), I created a connection tester, a Windows app that accepts a connection string as a parameter and verifies whether it is valid. After entering the proper connection string, I hit the button and the test was successful. So now I had trimmed my suspects down to my web app and IIS7. After Googling around I found this post by Rakki Muthukumar (a Microsoft Developer Support Engineer for ASP.NET and IIS7): http://blogs.msdn.com/b/rakkimk/archive/2007/11/03/iis7-running-32-bit-and-64-bit-asp-net-versions-at-the-same-time-on-different-worker-processes.aspx. So I scouted around IIS7's management console and found the little "Enable 32-Bit Applications" tweak under the application pool my app is a member of. After changing this parameter to TRUE, yahoo (although I'm a Google kind of person), the web app works.
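
    For anyone scripting this rather than clicking through the console, the same switch can be flipped with appcmd (the pool name here is a placeholder):

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true

    The pool then runs its worker process in WOW64, so 32-bit OLE DB/ODBC providers such as IBMDA400 become visible to the app.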

    Read the article

  • Oracle Coherence & Oracle Service Bus: REST API Integration

    - by Nino Guarnacci
    This post aims to highlight one of the features of Oracle Coherence that allows it to be easily added and integrated into a wide variety of projects: the REST API exposed by Coherence nodes, through which you can interact with the in-memory data grid.

    Oracle Coherence and Oracle Service Bus are natively integrated through a feature of the Oracle Service Bus which allows you to use the Coherence grid cache during the configuration of a business service. This feature lets you use an intermediate cache layer to retrieve the answers from previous invocations of the same service, without necessarily having to invoke the real business service again. Directly from the web console of Oracle Service Bus, you can decide the eviction policies of the objects/answers and define the discriminating parameters that identify their uniqueness.

    The Coherence REST APIs, however, allow you to integrate both products for other purposes, enabling new architectural designs. Consider a Coherence node as a simple service which interoperates through standard protocols, in particular REST (with JSON and XML): think of Coherence as a company-wide shared service, a centralized "map and reduce" implementation that you can access through a huge variety of protocols (transports and envelopes). An amazing step forward for those who still imagine connectors and code. This type of integration does not require writing custom code or a complex implementation to be self-supported; the added value comes from the incredible value of both products independently, and still more from their simple and robust integration.

    As already mentioned, this scenario opens a hidden door behind the columns of these two products, leading to new ideas and perspectives for enterprise architectures that increasingly wink at next-generation applications: simple and dynamic, perhaps towards mobile and Web 2.0.

    Below is a small, simple demo intended to show how easily these two products integrate via the Coherence REST API, and to help you imagine new enterprise architectures using this approach. The idea is to create a centralized alerting system, fed easily from any of the company's applications regardless of the technology with which they were built, and then to expose the alerts through a standard representation protocol, RSS, via a service on the service bus, so you can browse and search only the alerts that interest you: by category, author, title, date, and so on. The steps needed to implement this system are very simple and very few; they are listed below so they can be easily replicated within your environment. Remember that the demo is only meant to demonstrate how easily Oracle Coherence and the Oracle Service Bus integrate, and to stimulate your imagination towards new technological approaches.

    1) Install the two products. In this demo I used the following (if necessary, consult the installation guides of the two products):
    - Oracle Service Bus ver. 11.1.1.5.0: http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html
    - Oracle Coherence ver. 3.7.1: http://www.oracle.com/technetwork/middleware/coherence/downloads/index.html

    2) Because we chose to create a centralized alerting system, we need to define a structure type containing some alerting attributes, useful to preserve and organize the information of the various alerts sent by the different applications.
    To that end, a Java class named Alert was built, containing the canonical properties of an alert:
    - Title
    - Description
    - System
    - Time
    - Severity

    3) Next, we need two configuration files for the Coherence node, in order to save the Alert objects within the grid through REST/HTTP (alongside the native APIs for Java, C++, C, and .NET): coherence-rest-config.xml and resty-server-config.xml. [The original post shows both configuration files here.] This minimal configuration provides a distributed cache named "alerts" that can also be accessed via HTTP/REST on host "localhost" over port "8080"; objects are of type oracle.cohsb.Alert.

    4) [The original post shows here the simple Java class that represents the alert messages.]

    5) At this point we just need to start up our Coherence node, which listens over HTTP to manage the "alerts" cache and receives incoming XML or JSON objects of type Alert. Remember to include the Alert Java class, the Coherence libraries, and the configuration files in the classpath of the Coherence node, then run the class com.tangosol.net.DefaultCacheServer with the following parameters:

        -Dtangosol.coherence.log.level=9
        -Dtangosol.coherence.log=stdout
        -Dtangosol.coherence.cacheconfig=[PATH_TO_THE_FILE]\resty-server-config.xml

    6) Let's create a procedure to test our configuration and insert some custom alerts into the cache. The technology with which you implement this barely matters (JavaScript, Python, Ruby, Scala, C++, Java...), because the protocol used to communicate with Coherence is simply HTTP carrying JSON or XML. For this little demo I chose Java: a method to send/put an alert into the cache, a method to query and view the content of the cache, and a main method that executes both. No special library is added to the classpath for this class (the JSON structure is statically defined); when executed, it asks for some information such as title and description in order to compose and send an alert to the cache, and then performs an inquiry against the same cache. A good exercise at this point would be to build the same procedure with other technologies: a simple HTML page with some JavaScript, or Python, Ruby, and so on.

    7) Now we are ready to configure the Oracle Service Bus to integrate the two products. First we integrate the internal alerting system of Oracle Service Bus with our centralized alerting system based on the Coherence node, so that from monitoring, or directly from within our proxy message flows, we can throw alerts and save them directly into the Coherence node. To do this I chose JMS, natively present inside Oracle WebLogic / Service Bus. Access the Oracle WebLogic administration console and create and configure a new JMS connection factory and a new JMS destination (queue). Then create a new resource of type "alert destination" within our Oracle Service Bus project, configured to use the newly created JMS connection factory and JMS destination.
    Finally, in order to withdraw the alert messages enqueued in our JMS destination and send them to our Coherence node, we just need to create a new business service and proxy service within our Oracle Service Bus project. The business service is responsible for sending a message to our Coherence REST service, using PUT as the method action. The proxy service collects the messages enqueued on the destination and runs an XQuery transformation on them, translating them into valid XML Alert objects to be sent to our Coherence service through the newly created business service.

    Incredibly, we have just done a basic first integration between the native alerting system of Oracle Service Bus and our centralized alerting system, simply by configuring our Coherence node, without developing anything. It's time to test it out. To do this I created a proxy service that generates an alert through our "alert destination" whenever the proxy is invoked. After a few invocations generating fake alerts, we can open an Internet browser at http://localhost:8080/alerts/ and see what has been inserted into the Coherence node.

    8) We are ready for the final step: a new message flow that can be used to search and display the results in a standard way. I chose RSS as the representation standard, to display a formatted result on a huge variety of devices such as readers for the iPhone and Android; the query can be defined at request time, returning only the feed items you care about. To do this we need a new business service, a new proxy service, and a new XQuery transformation that translates the collection of alerts returned by our Coherence node into a well-formatted RSS 2.0 document. So we start from that XQuery resource; then the new business service that searches the alerts on our Coherence node using the REST API; and finally our last resource, the proxy service that is exposed as an RSS feed to mobile devices and traditional web readers, in which we intercept any search query and transform the result returned by the business service into an RSS 2.0 feed.

    A couple of little tricks to follow during the routing to the business service:
    - check for any query present in the URL, to request only a subset of alerts
    - set the HTTP "Accept" header, to get an XML answer instead of JSON

    In our little demo we also statically added some Coherence parameters to the request, sort=time:desc;start=0;count=100, asking Coherence to sort the results by date descending and return at most the first 100. Done!! Just incredible: our centralized alerting system is ready,
    inheriting all the qualities and capabilities of the two products involved, Oracle Coherence and Oracle Service Bus: RASP (Reliability, Availability, Scalability, Performance).

    Now try using your mobile device, or a normal Internet browser, to access the RSS feed just published. Some URLs you may test:

    Search for the last 100 alerts:
    http://localhost:7001/alarms

    Search for alerts that do not have time set to null (time is not null):
    http://localhost:7001/alarms?q=time+is+not+null

    Search for alerts whose system property is "Web Browser" (system = 'Web Browser'):
    http://localhost:7001/alarms?q=system+%3D+%27Web+Browser%27

    Search for alerts whose system property is "Web Browser", severity is "Fatal", and title contains the word "Javascript" (system = 'Web Browser' and severity = 'Fatal' and title like '%Javascript%'):
    http://localhost:8080/alerts?q=system+%3D+%27Web+Browser%27+AND+severity+%3D+%27Fatal%27+AND+title+LIKE+%27%25Javascript%25%27

    To compose more complex queries, I suggest reading the chapter in the Coherence documentation on CohQL (Coherence Query Language): http://download.oracle.com/docs/cd/E24290_01/coh.371/e22837/api_cq.htm

    Some useful links:
    - Oracle Coherence REST API documentation: http://download.oracle.com/docs/cd/E24290_01/coh.371/e22839/rest_intro.htm
    - Oracle Service Bus documentation: http://download.oracle.com/docs/cd/E21764_01/soa.htm#osb
    - REST explained on Wikipedia: http://en.wikipedia.org/wiki/Representational_state_transfer

    The whole material for this demo can be downloaded at http://blogs.oracle.com/slc/resource/cosb/coh-sb-demo.zip

    Author: Nino Guarnacci.
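
    The post's step-6 test client is Java and ships in the demo zip; as a rough stand-in, here is a hedged Python 3 equivalent of the put-and-query round trip against the REST endpoint configured above (the JSON property names are assumed to match the Alert bean's fields):

        import json
        import urllib.request

        BASE = "http://localhost:8080/alerts"  # cache name from resty-server-config.xml

        def put_alert(key, title, description, system, severity):
            """PUT one Alert into the 'alerts' cache under the given key."""
            body = json.dumps({
                "title": title,
                "description": description,
                "system": system,
                "time": 0,  # epoch millis; 0 as a placeholder
                "severity": severity,
            }).encode("utf-8")
            req = urllib.request.Request(
                f"{BASE}/{key}", data=body,
                headers={"Content-Type": "application/json"}, method="PUT")
            urllib.request.urlopen(req)

        def list_alerts():
            """GET the cache contents as JSON (the post browses this same URL)."""
            req = urllib.request.Request(
                BASE + "/", headers={"Accept": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())

        if __name__ == "__main__":
            put_alert("alert-001", "Disk almost full", "94% used on /",
                      "Web Browser", "Fatal")
            print(list_alerts())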

    Read the article

  • How can I make check_nrpe wait for my remote script to finish executing?

    - by Rauffle
    I have a Python script that's being used as a plugin for NRPE. This script checks whether a process is running on a virtual machine by doing an SSH one-liner with a "ps ax | grep process" attached. When executed manually, the script works as expected: it returns a single line of output for NRPE, plus a status based on whether or not the process is running. When I run the command set up to execute this script from my Nagios server, I instantly get the output "NRPE: Unable to read output"; yet when I run the script manually it takes about a second before it returns output. Other commands run just fine, so it would seem like NRPE needs to wait a second or two for output rather than instantly failing, but I've been unable to find any way of accomplishing this. Any tips? Thanks. PS: The virtual machines are not accessible from anywhere other than the host machine, hence the need for the NRPE plugin to SSH from the host into the VM to check the process.
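
    Two hedged things worth checking here. First, timeouts are tunable on both ends, so slowness alone shouldn't produce an instant failure; check_nrpe takes -t, and the daemon has a command_timeout setting in nrpe.cfg:

        check_nrpe -H vmhost -t 30 -c check_vm_process    # host and command names are placeholders

    Second, and more often the culprit for an instant "NRPE: Unable to read output": the plugin runs as the nrpe/nagios user, and an SSH one-liner that works for your interactive account can emit nothing at all under that user (missing key, host-key prompt, etc.). Reproducing the run as that user usually reveals it (path is a placeholder):

        su - nagios -s /bin/bash -c '/usr/local/nagios/libexec/check_vm_proc.py'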

    Read the article

  • SYS2 Scripts Updated – Scripts to monitor database backup, database space usage and memory grants now available

    - by Davide Mauri
    I’ve just released three new scripts in my “sys2” script collection, which can be found on CodePlex:

    Project page: http://sys2dmvs.codeplex.com/
    Source code download: http://sys2dmvs.codeplex.com/SourceControl/changeset/view/57732

    The three new scripts are the following:
    - sys2.database_backup_info.sql
    - sys2.query_memory_grants.sql
    - sys2.stp_get_databases_space_used_info.sql

    Here are some more details.

    database_backup_info
    This script was made to quickly check if and when backups were done. It reports the last full, differential, and log backup date and time for each database. Along with this information you also get some additional metadata that shows whether a database is read-only, plus its recovery model. By default it checks only the last seven days, but you can change this value by specifying how many days back you want to check.

    To analyze the last seven days and list only the databases with FULL recovery model and no log backup:

        select * from sys2.databases_backup_info(default) where recovery_model = 3 and log_backup = 0

    To analyze the last fifteen days and list only the databases with FULL recovery model and a differential backup:

        select * from sys2.databases_backup_info(15) where recovery_model = 3 and diff_backup = 1

    I just love this script; I use it every time I need to check that backups are not too old and that t-log backups are correctly scheduled.

    query_memory_grants
    This is just a wrapper around sys.dm_exec_query_memory_grants that enriches the default result set with the text of the query for which memory has been granted or is waiting for a memory grant and, optionally, its execution plan.

    stp_get_databases_space_used_info
    This is a stored procedure that lists all the available databases and, for each one, the overall size, the used space within that size, the maximum size it may reach, and the auto-grow options. This is another script I use every day in order to monitor, track, and forecast database space usage.

    As usual, feedback and suggestions are more than welcome!

    Read the article

  • Linux Email Server Auto-Reply

    - by Robert Smith
    I need to set up a mail server with the following functionality: if a user sends an email to a specific address on this server, the server must first check whether the email has a PDF attachment, do some processing on that PDF file, and then reply to the user's initial mail with the new PDF file attached. My question is how it would be possible to achieve this, and what software / mail server you would recommend. I'm thinking it could be solved the following way: when the server receives a new email, it executes an external Python script that checks the attachment, processes the PDF file, and then sends it back to the user's mailbox, as sketched below. What mail server would be able to do this, and what configuration does it need?
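
    Most Unix MTAs (Postfix, Exim, Sendmail) can pipe mail for one address into a program via an alias, e.g. an /etc/aliases entry like pdfbot: "|/usr/local/bin/pdf_reply.py". A hedged Python 3 sketch of such a script, where process_pdf() and the addresses are hypothetical placeholders:

        #!/usr/bin/env python3
        """Read an RFC 822 message on stdin, process its PDF, mail it back."""
        import smtplib
        import sys
        from email import policy
        from email.message import EmailMessage
        from email.parser import BytesParser

        def process_pdf(data: bytes) -> bytes:
            return data  # hypothetical placeholder: do the real PDF processing here

        msg = BytesParser(policy=policy.default).parse(sys.stdin.buffer)

        # Find the first PDF attachment, if any.
        pdf = next((p.get_content() for p in msg.iter_attachments()
                    if p.get_content_type() == "application/pdf"), None)
        if pdf is None:
            sys.exit(0)  # no PDF: silently drop (or bounce, as policy dictates)

        reply = EmailMessage()
        reply["From"] = "pdfbot@example.com"       # placeholder address
        reply["To"] = msg["Reply-To"] or msg["From"]
        reply["Subject"] = "Re: " + (msg["Subject"] or "your PDF")
        reply.set_content("Your processed PDF is attached.")
        reply.add_attachment(process_pdf(pdf), maintype="application",
                             subtype="pdf", filename="processed.pdf")

        with smtplib.SMTP("localhost") as s:       # hand the reply to the local MTA
            s.send_message(reply)

    Any of the mainstream MTAs will do; the pipe-alias mechanism (or a Postfix transport, maildrop, or procmail rule) is the piece that matters.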

    Read the article

  • Is HAProxy able to pass SSL requests to Apache + mod_ssl?

    - by Josh Smeaton
    Most of the documentation I've read regarding HAProxy and SSL seems to suggest that SSL must be handled before it reaches HAProxy. Most solutions focus on using stunnel, and a few suggest Apache + mod_ssl in front of HAProxy. Our problem, though, is that we use Apache as a reverse proxy to a number of other sites which use their own certificates. Ideally what we'd like is for HAProxy to pass all SSL traffic to Apache, and let Apache handle either the SSL or the reverse proxying. Our current setup:

        Apache Reverse Proxy -> Apache + mod_ssl -> Application

    What I'd like to do:

        HAProxy -> Apache Reverse Proxy -> Apache + mod_ssl -> Application

    Is it possible to do this? Is HAProxy capable of forwarding SSL traffic to be handled by a server BEHIND it?
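
    It is: in TCP mode HAProxy never looks inside the stream, so the TLS handshake passes through untouched and Apache terminates SSL with whatever certificates it already has. A hedged config sketch (the backend address is a placeholder); note that in pure TCP mode HAProxy cannot route on Host or URL, and the backend sees HAProxy's IP unless the PROXY protocol or transparent proxying is layered on:

        listen https-passthrough
            bind :443
            mode tcp
            option tcplog
            balance roundrobin
            server apache1 10.0.0.10:443 check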

    Read the article

  • Best book for learning linux shell scripting?

    - by chakrit
    I normally work on Windows machines, but on some occasions I switch to development on Linux, and my most recent project will be written entirely on a certain Linux platform (not the standard Apache/MySQL/PHP setup). So I thought it would pay to learn to write some Linux automation scripts now. I can get around the system, start/stop services, and compile/install stuff fine; those are probably basic drills for a programmer. But if, for example, I wanted to deploy a certain application automatically to a newly minted Linux machine every month, I'd love to know how to do it. So if I wanted to learn serious Linux shell scripting, what book should I be reading? Thanks

    Read the article
