Search Results



  • Node.js Adventure - Node.js on Windows

    - by Shaun
    Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and its product, Worktile. He asked me to design a synchronization solution to support his product going forward, and he preferred that I implement the service in Node.js, since Worktile itself is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I'm no JavaScript expert; in fact I'm very new to it, so it scared me a bit when he asked me to use Node.js. But after about a week of investigation I have to say Node.js is very easy to learn, use and deploy, even with very limited JavaScript skills, and I have come to love it. Hence I decided to start a series named "Node.js Adventure", in which I will tell the story of learning and using Node.js on Windows and Windows Azure. This is the first installment.

    (Brief) Introduction of Node.js

    I don't want to give a fully detailed introduction of Node.js; there are many resources on the internet, the best one being its homepage. Node.js was created by Ryan Dahl and is sponsored by Joyent. It consists of roughly 80% C/C++ for the core and 20% JavaScript for the API, and it uses CommonJS as its module system, which we will explain later. The official definition of Node.js is:

        Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    First of all, Node.js uses JavaScript as its development language and runs on top of the V8 engine, the same engine used by Chrome. It brings JavaScript, a client-side language, into the backend service world, which is why many people say, somewhat loosely, that "Node.js is server-side JavaScript".

    Additionally, Node.js uses an event-driven, non-blocking IO model. This means there is no way to block the working thread in Node.js; every IO operation executes asynchronously. This is a huge benefit whenever our code performs IO operations such as reading from disk, connecting to a database or consuming a web service. Unlike IIS or Apache, Node.js does not use a multi-threaded model. In Node.js a single working thread serves all user requests and dispatches responses (the ST star in the figure below), backed by a POSIX async thread pool containing many async threads (the AT stars) for IO operations. When a user issues an IO request, the ST accepts it but does not perform the IO itself. Instead the ST picks up an AT from the POSIX async thread pool, hands the operation to it, and goes back to serving other requests while the AT performs the IO asynchronously. Suppose another user arrives before the AT completes its IO: the ST serves the new request, picks up another AT from the pool, and goes back again. When the first AT finishes its IO, it takes the result back and waits for the ST; the ST takes the response, returns the AT to the pool, and responds to the user. When the second AT finishes its job, the ST responds to the second user in the same way. So in Node.js a single thread loops between serving clients' requests and collecting POSIX results, passing the data back and forth, while the async jobs themselves are handled by the POSIX pool.
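    To make the model concrete, here is a minimal sketch (not from the original sample code; /tmp/data.txt is a hypothetical file) contrasting a blocking and a non-blocking read:

        var fs = require("fs");

        // Blocking: the single working thread waits for the disk.
        var data = fs.readFileSync("/tmp/data.txt");
        console.log("sync read done: " + data.length + " bytes");

        // Non-blocking: the IO is handed off to the async pool described above,
        // and the callback runs on the main thread when the result is ready.
        fs.readFile("/tmp/data.txt", function (err, asyncData) {
            if (err) throw err;
            console.log("async read done: " + asyncData.length + " bytes");
        });
        console.log("this line prints before the async read completes");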
    This event-driven, non-blocking IO model performs much better than the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model while Nginx uses the event-driven non-blocking model; the performance and memory-usage comparisons between them are shown in the charts below, captured from the video "NodeJS Basics: An Introductory Training", presented by a Cloud Foundry developer advocate.

    Node.js on Windows

    Executing a Node.js application on Windows is very simple. First we need to download the latest Node.js platform from its website. Once installed, it registers its folder in the system PATH variable so that we can execute Node.js from anywhere. To confirm the installation, just open a command window and type "node"; the Node.js console will appear. As you can see, this is an interactive JavaScript console where we can type simple JavaScript code and commands.

    To run a Node.js application, just specify the source code file name as the argument of the "node" command. For example, let's create a source file named "helloworld.js" and copy in the sample code from the Node.js website:

        var http = require("http");

        http.createServer(function (req, res) {
            res.writeHead(200, {"Content-Type": "text/plain"});
            res.end("Hello World\n");
        }).listen(1337, "127.0.0.1");

        console.log("Server running at http://127.0.0.1:1337/");

    This code creates a web server that listens on port 1337 and returns "Hello World" for every incoming request. Run it in the command window, then open a browser and navigate to http://localhost:1337/.

    As you can see, when using Node.js we are not so much creating a web application as creating a web server: we have to deal with the request, the response, and the related headers, status codes, etc. This is one of the benefits of using Node.js: it is lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that Node.js uses CommonJS as its module system, so we can leverage modules to simplify our job. Furthermore, there are thousands of modules available on the internet covering almost every area of server-side application development.

    NPM and Node.js Modules

    Node.js uses CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named "index.js", all the modules it needs are located in the "node_modules" folder, and in "index.js" we can import a module by specifying its name. For example, the code we just wrote imports a module named "http", a built-in module installed along with Node.js, so that we can use the code in that module.

    Besides the built-in modules there are many more available on the NPM website, where thousands of developers contribute and download modules. This is another benefit of using Node.js: there are many modules we can use, their number grows very fast, and we can also publish our own modules to the community. When I wrote this post there were 14,608 modules on NPM and about 10 thousand downloads per day.

    Installing a module is very simple. Let's go back to our command window and enter the command "npm install express". This command installs a module named "express", an MVC framework on top of Node.js.
    Now let's create another JavaScript file named "helloweb.js" and copy in the code below. It imports the "express" module; when the user browses the home page it responds with a text message, and if the incoming URL matches "/Echo/:value", where "value" is whatever the user specified, it echoes the value back along with the current date and time in JSON format. Finally, the website listens on port 12345.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        app.get("/Echo/:value", function(req, res) {
            var value = req.params.value;
            res.json({
                "Value" : value,
                "Time" : new Date()
            });
        });

        console.log("Web application opened.");
        app.listen(12345);

    For more information and the API of "express", please have a look here. Start the application from the command window with "node helloweb.js", then navigate to the home page to see the response in the browser. And if we go to, for example, http://localhost:12345/Echo/Hello Shaun, we can see the JSON result.

    The "express" module is very popular on NPM; it makes the job simple when we need to build an MVC website. There are many other very useful modules on NPM as well:

    - underscore: a utility module covering many common functions such as each, map, reduce, select, etc.
    - request: a very simple HTTP request client.
    - async: a library for coordinating async operations.
    - wind: a library that lets us control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.

    Node.js and IIS

    I demonstrated how to run a Node.js application from the console. Since we are on Windows, another common requirement would be: "Can I host Node.js in IIS?" The answer is "Yes". Tomasz Janczuk created a project called IISNode, which we can find on his GitHub space, and Scott Hanselman has published a blog post introducing it.

    Summary

    In this post I provided a very brief introduction to Node.js, including its official definition, its architecture and how it implements the event-driven non-blocking model. I then described how to install and run a Node.js application from the Windows console, covered the Node.js module system and the NPM command, and at the end referred to some links about IISNode, an IIS extension that allows Node.js applications to run on IIS. Node.js has become a very popular server-side application platform, especially this year. By leveraging its non-blocking IO model and async features, it is very useful for building highly scalable, asynchronous services, and I think it will be widely used in cloud application development in the near future.

    In the next post I will explain how to use SQL Server from Node.js.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by arranging all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2 there are 28 kinds of activities, which fall into three categories: Control activities, OWB-specific activities and Utility activities. For more information about Process Flow activities, please refer to the OWB online documentation.

    Most of these activities are pre-defined for a specific use. For example, the Mapping activity allows executing an OWB mapping inside a Process Flow, and the FTP activity allows interaction between the local host and a remote FTP server. Beyond those special-purpose activities, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder, bringing flexibility and extensibility to Process Flow. In this article, we will take a tour of using the User Defined activity. Let's start.

    Enable execution of User Defined activity

    Let's start this section by creating a very simple Process Flow containing a Start activity, a User Defined activity and an End Success activity. Leave all parameters of the USER_DEFINED activity unchanged, except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in the /tmp directory. Here is the content of /tmp/test.sh (this article demonstrates a scenario on a Linux system, and /tmp/test.sh is a Bash shell script):

        echo Hello World! > /tmp/test.txt

    Note: don't forget to grant the execute privilege on /tmp/test.sh to the OS oracle user. For simplicity, we just use the following command:

        chmod +x /tmp/test.sh

    OK, it's so simple that we've almost done it. Now deploy the Process Flow and run it. On a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". That's because, by default, the User Defined activity is DISABLED. The relevant configuration can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED

    The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISABLED. NATIVE_JAVA uses the Java 'Runtime.exec' interface; SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner, executed as the default operating system user configured by the DBA; DISABLED prevents execution via these operators. We enable execution of the User Defined activity by setting:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=NATIVE_JAVA

    Restart the Control Center service for the change to take effect:

        cd <ORACLE_HOME>/owb/rtp/sql
        sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql
        sqlplus OWBSYS/<password of OWBSYS> @start_service.sql

    Then run the Process Flow again. This time it completes successfully, and the execution of /tmp/test.sh generates the file /tmp/test.txt, containing the line "Hello World!".
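    For convenience, the property change above can also be scripted; a hedged one-liner (assuming a Linux install and that ORACLE_HOME is set), to be followed by the service restart shown above:

        sed -i 's/Shell.security_constraint=DISABLED/Shell.security_constraint=NATIVE_JAVA/' \
            "$ORACLE_HOME/owb/bin/admin/Runtime.properties"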
    Pass parameters to User Defined Activity

    The Process Flow created in the above section has a drawback: the User Defined activity neither accepts any information from OWB nor gives any meaningful result back to OWB. In other words, it lacks interaction. Sometimes such a Process Flow can fulfill the business requirement, but most of the time we need the User Defined activity to execute according to information produced by steps before it. In this section, we will see how to pass parameters to the User Defined activity and on into the shell script it executes.

    First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST, a list of parameters that will be passed to the command. Parameters are separated from one another by a token; the token is taken as the first character of the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc', 'def' and 'ghi' you can use any of the following equivalents:

        ?abc?def?ghi?
        !abc!def!ghi!
        |abc|def|ghi|

    If the token character or '\' needs to be included as part of a parameter, it must be preceded with '\' (for example '\\'). If '\' is the token character, then '/' becomes the escape character.

    Let's configure the PARAMETER_LIST parameter as shown in the figure, and modify the shell script /tmp/test.sh as follows:

        echo $1 is saying hello to $2! > /tmp/test.txt

    Re-deploy the Process Flow and run it. The generated /tmp/test.txt now contains the line:

        Bob is saying hello to Alice!

    In the example above, the parameters passed into the shell script are static. That is not very useful, because instead of passing parameters we could simply write their values directly in the shell script. To make the case more meaningful, we can pass in two dynamic parameters obtained from the previous activity. Prepare the Process Flow as shown: the Mapping activity MAPPING_1 has two output parameters, FROM_USER and TO_USER; the User Defined activity has two input parameters, FROM_USER and TO_USER; all four parameters are of String type. Additionally, the Process Flow has two string variables, VARIABLE_FOR_FROM_USER and VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets its value from the output parameter FROM_USER of MAPPING_1; we achieve this by binding both parameters to VARIABLE_FOR_FROM_USER, as shown in the two figures below. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets its value from the output parameter TO_USER of MAPPING_1. We also need to change the PARAMETER_LIST of the User Defined activity accordingly.

    Now the shell script receives its input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependencies, then run it. The generated /tmp/test.txt contains the line:

        USER B is saying hello to USER A!

    'USER B' and 'USER A' are the two outputs of the Mapping execution.

    Write the shell script within Oracle Warehouse Builder

    In the previous section the shell script lived in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by properly configuring three parameters of the User Defined activity:

        COMMAND: the path of the interpreter by which the shell script will be interpreted (for example /bin/bash).
        PARAMETER_LIST: left blank.
        SCRIPT: the shell script content.

    Note that on Linux the shell script content is passed into the interpreter as standard input at runtime. To actually pass parameters into such a script, we can use variable substitutions.
    As in the following figure, ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity, and so will the ${TO_USER} symbol. Besides custom substitution variables, OWB also provides some pre-defined system substitution variables; you can refer to the online documentation for those. Deploy the Process Flow and run it. The generated /tmp/test.txt contains the line:

        USER B is saying hello to USER A!

    Leverage the return value of User Defined activity

    All of the previous sections connect the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results?

    1. The simplest way is to add three simple-conditioned outgoing transitions to the User Defined activity, as in the figure below. In the figure, to simplify the scenario, we connect the User Defined activity to three End activities. Basically, if the shell script ends successfully, the whole Process Flow ends at END_SUCCESS; otherwise, it ends at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic.

    2. Alternatively, we can use complex conditions to work with different results of the User Defined activity. Previously our script contained only this line:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt

    We can add more logic to it and return different values accordingly:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
        if CONDITION_1 ; then
            ......
            exit 0
        fi
        if CONDITION_2 ; then
            ......
            exit 2
        fi
        if CONDITION_3 ; then
            ......
            exit 3
        fi

    After that we can leverage the result by checking RESULT_CODE in the condition expressions of those outgoing transitions. Let's suppose we have the Process Flow shown in the following graph (SUB_PROCESS_n stands for different further processes). We can set a complex condition on the transition from USER_DEFINED to SUB_PROCESS_1, and the other transitions can be set in the same way. Note that in our shell script we return 0, 2 and 3, but not 1: on a Linux system, if the shell script comes across a system error such as an IO error, the return value will be 1, so we leave 1 free and handle such a return value explicitly.

    Summary

    Let's summarize what has been discussed in this article:

        How to create a Process Flow with a User Defined activity in it
        How to pass parameters from the prior activity to the User Defined activity and finally into the shell script
        How to write the shell script within Oracle Warehouse Builder
        How to do variable substitutions
        How to let the User Defined activity return different values, and how we can leverage them
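    To tie the pieces above together, here is a hedged sketch of what a complete /tmp/test.sh could look like using positional parameters and distinct exit codes (the [ -s ... ] test is a placeholder for whatever real business condition applies):

        #!/bin/bash
        # Sketch only: expects a PARAMETER_LIST such as ?Bob?Alice?
        FROM_USER="$1"
        TO_USER="$2"
        echo "${FROM_USER} is saying hello to ${TO_USER}!" > /tmp/test.txt

        # Return different values for the conditional transitions;
        # exit code 1 is left free to signal system errors, as noted above.
        if [ -s /tmp/test.txt ]; then
            exit 0    # routed to END_SUCCESS / SUB_PROCESS_1 via RESULT_CODE
        else
            exit 2    # routed by a RESULT_CODE condition on another transition
        fi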


  • Sync Your Pidgin Profile Across Multiple PCs with Dropbox

    - by Matthew Guay
    Pidgin is definitely our favorite universal chat client, but adding all of your chat accounts to multiple computers can be frustrating. Here's how you can easily transfer your Pidgin settings to other computers and keep them in sync using Dropbox.

    Getting Started

    Make sure you have both Pidgin and Dropbox installed on any computers you want to sync. To sync Pidgin, you need to:

        Move your Pidgin profile folder on your first computer to Dropbox
        Create a symbolic link at your old profile location pointing to the new folder in Dropbox
        Delete the default Pidgin profile on your other computer, and create a symbolic link at the default Pidgin profile location pointing to your Dropbox Pidgin profile

    This sounds difficult, but it's actually easy if you follow these steps. Here we already had all of our accounts set up in Pidgin on Windows 7, and then synced this profile with an Ubuntu and an XP computer with fresh Pidgin installs. Our instructions for each OS are based on this, but just swap the sync order if your main Pidgin install is on XP or Ubuntu.

    Please Note: Make sure Pidgin isn't running on your computer while you are making these changes!

    Sync Your Pidgin Profile from Windows 7

    Here is Pidgin with our accounts already set up. Our Pidgin profile has a Gtalk, an MSN Messenger, and a Facebook Chat account, and lots of log files. Let's move this profile to Dropbox to keep it synced. Exit Pidgin, then enter %appdata% in the address bar in Explorer, or press Win+R and enter %appdata%. Select the .purple folder, which holds your Pidgin profiles and settings, and press Ctrl+X to cut it. Browse to your Dropbox folder, and press Ctrl+V to paste the .purple folder there.

    Now we need to create the symbolic link. Enter "command" in your Start menu search, right-click the Command Prompt shortcut, and select "Run as administrator". We can now use the mklink command to create a symbolic link at the old .purple location pointing to the folder in Dropbox. In Command Prompt, enter the following, substituting username with your own username (the link path comes first, the Dropbox target second):

        mklink /D "C:\Users\username\AppData\Roaming\.purple" "C:\Users\username\Documents\My Dropbox\.purple"

    And that's it! You can open Pidgin now to make sure it still works as before, with your files being synced with Dropbox.

    Please Note: These instructions work the same for Windows Vista. Also, if you are syncing settings from another computer to Windows 7, delete the .purple folder instead of cutting and pasting it, then create the same symbolic link.

    Add your Pidgin Profile to Ubuntu

    Our Ubuntu computer had a clean install of Pidgin, so we didn't need any of the information in its settings. If you've run Pidgin, even without creating an account, you will first need to remove its settings folder. Open your home folder, click View, and then "Show Hidden Files" to see your settings folders. Select the .purple folder, and delete it.

    Now, to create the symbolic link, open Terminal and enter the following, substituting username with your username:

        ln -s /home/username/Dropbox/.purple /home/username/

    Open Pidgin, and you will see all of the accounts from your other computer. No usernames or passwords needed; everything is set up and ready to go. Even your status is synced: we had our status set to Away in Windows 7, and it automatically came up the same in Ubuntu.

    Please Note: If your primary Pidgin account is on Ubuntu, cut your .purple folder and paste it into your Dropbox folder instead;
    the ln command shown above then stays exactly the same.

    Add your Pidgin Profile to Windows XP

    On XP we also had a clean install of Pidgin. If you've run Pidgin, even without creating an account, you will first need to remove its settings folder. Click Start, then Run, and enter %appdata%. Delete your .purple folder.

    XP does not include a way to create a symbolic link, so we will use the free Junction tool from Sysinternals. Download Junction (link below) and unzip the folder. Open Command Prompt (click Start, select All Programs, then Accessories, and select Command Prompt), and enter cd followed by the path of the folder where you saved Junction.

    Now, to create the junction, enter the following in Command Prompt, substituting username with your username (the junction location comes first, the Dropbox target second):

        junction "C:\Documents and Settings\username\Application Data\.purple" "C:\Documents and Settings\username\My Documents\My Dropbox\.purple"

    Open Pidgin, and you will see all of your settings just as they were on your other computer. Everything's ready to go.

    Please Note: If your primary Pidgin account is on Windows XP, cut your .purple folder and paste it into your Dropbox folder instead, then create the junction with the same command.

    Conclusion

    This is a great way to keep all of your chat and IM accounts available from all of your computers. You can easily access logs from chats you had on your desktop from your laptop, or if you add a chat account on your work computer you can use it seamlessly from your home computer that evening. Now Pidgin is the universal chat client that is always ready whenever and wherever you need it!

    Links

        Download Pidgin
        Download and sign up for Dropbox
        Download Junction for XP
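    As a quick sanity check after creating the link (a hedged example; adjust the paths for your own username), you can confirm the link type from a command prompt:

        dir "C:\Users\username\AppData\Roaming" | findstr /i purple

    On Windows 7 or Vista the .purple entry should be listed as <SYMLINKD>; on XP (checked under Application Data), Junction-created links show up as <JUNCTION>.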


  • How to Configure Windows Machine to Allow File Sharing with DNS Alias

    - by Michael Ferrante
    I have not seen a single article posted anywhere online that brings together all the settings needed to make this work properly on Windows, so I thought I would post it here.

    To facilitate failover schemes, a common technique is to use DNS CNAME records (DNS aliases) for different machine roles. Then, instead of changing the Windows computer name of the actual machine, one can simply switch a DNS record to point to a new host. This can work on Microsoft Windows machines, but to make it work with file sharing the following configuration steps need to be taken.

    Outline

        1. The Problem
        2. The Solution
           - Allowing other machines to use file sharing via the DNS alias (DisableStrictNameChecking)
           - Allowing the server machine to use file sharing with itself via the DNS alias (BackConnectionHostNames)
           - Providing browse capabilities for multiple NetBIOS names (OptionalNames)
           - Registering the Kerberos service principal names (SPNs) for other Windows functions like printing (setspn)
        3. References

    1. The Problem

    On Windows machines, file sharing can work via the computer name, with or without full qualification, or by the IP address. By default, however, file sharing will not work with arbitrary DNS aliases. To enable file sharing and other Windows services to work with DNS aliases, you must make the registry changes detailed below and reboot the machine.

    2. The Solution

    Allowing other machines to use file sharing via the DNS alias (DisableStrictNameChecking)

    This change alone will allow other machines on the network to connect to the machine using any arbitrary hostname. (However, this change will not allow a machine to connect to itself via a hostname; see BackConnectionHostNames below.) Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value DisableStrictNameChecking of type DWORD set to 1.

    Allowing the server machine to use file sharing with itself via the DNS alias (BackConnectionHostNames)

    This change is necessary for a DNS alias to work with file sharing from a machine to itself. It creates the Local Security Authority host names that can be referenced in an NTLM authentication request. To do this, follow these steps on the machine that hosts the shares: under the registry subkey HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0, add a new Multi-String Value named BackConnectionHostNames; in the Value data box, type the CNAME (the DNS alias) that is used for the local shares on the computer, and then click OK. Note: type each host name on a separate line.

    Providing browse capabilities for multiple NetBIOS names (OptionalNames)

    This provides the ability to see the network alias in the network browse list. Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value OptionalNames of type Multi-String. Add a newline-delimited list of names that should be registered under the NetBIOS browse entries. Names should match NetBIOS conventions (i.e. not the FQDN, just the hostname).
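    If you prefer the command line to Regedit, these edits can be scripted with the built-in reg tool; a hedged sketch ("myalias" is a placeholder for your own alias; run from an elevated prompt and reboot afterwards):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "myalias"
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v OptionalNames /t REG_MULTI_SZ /d "myalias"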
    Registering the Kerberos service principal names (SPNs) for other Windows functions like printing (setspn)

    NOTE: You should not need to do this for the basic functions to work; it is documented here for completeness. We had one situation in which the DNS alias was not working because an old SPN record was interfering, so if the other steps aren't working, check whether there are any stray SPN records. You must register the Kerberos service principal names (SPNs), the host name, and the fully-qualified domain name (FQDN) for all the new DNS alias (CNAME) records. If you do not, a Kerberos ticket request for a DNS alias (CNAME) record may fail and return the error code KDC_ERR_S_SPRINCIPAL_UNKNOWN.

    To view the Kerberos SPNs for the new DNS alias records, use the Setspn command-line tool (setspn.exe). The Setspn tool is included in Windows Server 2003 Support Tools, which you can install from the Support\Tools folder of the Windows Server 2003 startup disk. To list all records for a computer name:

        setspn -L computername

    To register the SPNs for the DNS alias (CNAME) records, use the Setspn tool with the following syntax:

        setspn -A host/your_ALIAS_name computername
        setspn -A host/your_ALIAS_name.company.com computername

    3. References

    All the Microsoft references can be reached via http://support.microsoft.com/kb/ followed by the KB number.

        KB281308 - Connecting to SMB share on a Windows 2000-based computer or a Windows Server 2003-based computer may not work with an alias name. Covers the basics of making file sharing work properly with DNS alias records from other computers to the server computer.
        KB926642 - Error message when you try to access a server locally by using its FQDN or its CNAME alias after you install Windows Server 2003 Service Pack 1: "Access denied" or "No network provider accepted the given network path". Covers how to make the DNS alias work with file sharing from the file server itself.
        KB870911 - How to consolidate print servers by using DNS alias (CNAME) records in Windows Server 2003 and in Windows 2000 Server. Covers more complex scenarios in which records in Active Directory may need to be updated for certain services and browsing to work properly, including how to register the Kerberos service principal names (SPNs).
        KB829885 - Distributed File System update to support consolidation roots in Windows Server 2003. Covers even more complex scenarios with DFS (discusses OptionalNames).


  • Enhanced REST Support in Oracle Service Bus 11gR1

    - by jeff.x.davies
    In a previous entry on REST and Oracle Service Bus (see http://blogs.oracle.com/jeffdavies/2009/06/restful_services_with_oracle_s_1.html) I encoded the REST query string as part of the relative URL. For example, consider the following URI:

        http://localhost:7001/SimpleREST/Products/id=1234

    Now, technically there is nothing wrong with this approach. However, it is generally more common to encode the search parameters into the query string. Take a look at the following URI, which shows this principle:

        http://localhost:7001/SimpleREST/Products?id=1234

    At first blush this appears to be a trivial change, but this approach is more intuitive, especially if you are passing in multiple parameters. For example:

        http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony

    The above URI is obviously used to retrieve a list of televisions made by Sony. In prior versions of OSB (before 11gR1PS3), parsing the query string of a URI was more difficult than in the current release. In 11gR1PS3 it is now much easier to parse query strings, which in turn makes developing REST services in OSB even easier. In this blog entry, we will re-implement the REST-ful Products service using query strings for passing parameter information.

    Let's begin with the implementation of the Products REST service, which lives in the Products.proxy file of the project. The overall structure of the service, shown in the following screenshot, is a common pattern for REST services in the Oracle Service Bus: you implement a different flow for each of the HTTP verbs that you want your service to support.

    Let's take a look at how the GET verb is implemented. This is the path taken if you were to point your browser to:

        http://localhost:7001/SimpleREST/Products?id=1234

    There is an Assign action in the request pipeline that shows how to extract a query parameter. Here is the expression used to extract the id parameter:

        $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="id"]/@value

    The Assign action stores the value in an OSB variable named id. Using this type of XPath statement you can query for any parameter by name, without regard to its order in the parameter list. The Log statement is there simply to provide some debugging info in the OSB server console. The response pipeline contains a Replace action that constructs the response document for our REST service. Most of the response data is static, but the ID field that is returned is set based upon the query parameter that was passed into the REST proxy.
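    For reference, the fragment of $inbound that this XPath walks looks roughly like the following (a hedged sketch reconstructed from the expression above, not captured from a live server):

        <ctx:transport>
            <ctx:request>
                <http:query-parameters>
                    <http:parameter name="id" value="1234"/>
                </http:query-parameters>
            </ctx:request>
        </ctx:transport>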
    Testing the REST service with a browser is very simple: just point it to the URL shown earlier. However, the browser is really only good for testing simple GET services. The OSB Test Console provides a much more robust environment for testing REST services, no matter which HTTP verb is used. Let's see how to use the Test Console to test this GET service. Open the OSB web console (http://localhost:7001/sbconsole) and log in as the administrator. Click the Test Console icon (the little "bug") next to the Products proxy service in the SimpleREST project. This brings up the Test Console browser window. Unlike with SOAP services, we don't need to do much work in the request document, because all of our request information will be encoded into the URI of the service itself. Below the Request Document section of the Test Console is the Transport section. Expand that section and modify the query-parameters and http-method fields as shown in the next screenshot. By default, the query-parameters field will have the enclosing tags already defined; you just need to add a tag for each parameter you want to pass into the service. For our purposes with this particular call, you'd set the query-parameters field as follows:

        <tp:query-parameters>
            <tp:parameter name="id" value="1234" />
        </tp:query-parameters>

    Now you are ready to press the Execute button to see the results of the call.

    That covers the process for parsing query parameters in OSB. However, what if you have an OSB proxy service that needs to consume a REST-ful service? How do you tell OSB to pass the query parameters to the external service? In the sample code you will see a second proxy service called CallREST. It invokes the Products proxy service in exactly the same way it would invoke any REST service. Our CallREST proxy service is defined as a SOAP service, which helps demonstrate OSB's ability to mediate between service consumers and service providers, decreasing the level of coupling between them.

    If you examine the message flow for the CallREST proxy service, you'll see that it uses an Operational branch to isolate the processing logic for each operation defined by the SOAP service. We will focus on the getProductDetail branch, which calls the Products REST service using the HTTP GET verb. Expand the getProduct pipeline and the stage node that it contains. There is a single Assign statement that simply extracts the productID from the SOAP request and stores it in a local OSB variable. Nothing surprising here. The real work (and the real learning) occurs in the Route node below the pipeline.

    The first thing to learn is that you need to use a Route node when calling REST services, not a Service Callout or a Publish action. That's because only the Routing action has access to the $outbound variable, especially when invoking a business service. The Routing action contains three Insert actions. The first Insert action shows how to specify the HTTP verb as a GET. The second Insert action inserts the query-parameters XML node into the request; this element does not exist in the request by default, so we need to add it manually. Now that we have the element defined in our outbound request, we can fill it with the parameters that we want to send to the REST service. In the following screenshot you can see how we define the id parameter based on the productID value we extracted earlier from the SOAP request document. That expression will look for the parameter that has the name id and extract its value.

    That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of Oracle Service Bus 11gR1PS3. Download the sample source code here: rest2_sbconfig.jar

    Ubuntu and the OSB Test Console

    You will get an error when you try to use the Test Console with the Oracle Service Bus on Ubuntu (and likely a number of other Linux distros also). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administration console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading, then select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tab and the General sub-tab in the main window. Look for the Listen Address field.
    By default it is blank, which means it is listening on all interfaces. For some reason Ubuntu doesn't like this, so enter a value like localhost or the specific IP address or DNS name for your server (usually it's just localhost in development environments). Save your changes and restart the server. Your Test Console will now work correctly.


  • Overwriting TFS Web Services

    - by javarg
    In this blog I will share a technique I used to intercept TFS web service calls. This technique is very invasive and requires you to overwrite the default TFS web service behavior; I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point). For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider the TFS Subscribers functionality before taking this route; check Martin's post about subscribers). So let's get started.

    The technique consists of versioning the TFS web service .asmx service classes. If you look into TFS's ASMX services you will notice that versioning is supported by creating a class hierarchy between different product versions. For instance, let's take the Work Item management service. Check the following .asmx file, located at:

        %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx

    The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3:

        <%-- Copyright (c) Microsoft Corporation. All rights reserved. --%>
        <%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %>

    The inheritance hierarchy for this service class is shown as a figure in the original post; note the naming convention used for service versioning (ClientService3, ClientService2, ClientService). We will need to overwrite the latest service version provided by the product (in this case ClientService3, for TFS 2010). The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic than the validations and constraints provided by the process template.
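    In code form, the versioning convention described above amounts to a chain like the following (a schematic sketch only, not the actual product source; the release mapping for the older classes is an assumption):

        // Schematic: each TFS release derives from the previous service version.
        public class ClientService : System.Web.Services.WebService { /* base surface */ }
        public class ClientService2 : ClientService { /* additions in a later release */ }
        public class ClientService3 : ClientService2 { /* TFS 2010 surface */ }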
    Important: back up the original .asmx file before creating one of your own. Then:

    1. Create a Visual Studio Web App project and include a new ASMX web service in the project.

    2. Add the following references to the project (found in the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\):

        Microsoft.TeamFoundation.Framework.Server.dll
        Microsoft.TeamFoundation.Server.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll
        Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll
        Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll

    3. Replace the default service implementation with something similar to the following code:

        /// <summary>
        /// Inherit from ClientService3 to overwrite the default implementation.
        /// </summary>
        [WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03",
                    Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")]
        public class CustomTfsClientService : ClientService3
        {
            [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)]
            public override bool BulkUpdate(
                XmlElement package,
                out XmlElement result,
                MetadataTableHaveEntry[] metadataHave,
                out string dbStamp,
                out Payload metadata)
            {
                var xe = XElement.Parse(package.OuterXml);

                // We only intercept WorkItem updates (this sample can easily be
                // extended to capture any operation).
                var wit = xe.Element("UpdateWorkItem");
                if (wit != null)
                {
                    if (wit.Attribute("WorkItemID") != null)
                    {
                        int witId = (int)wit.Attribute("WorkItemID");
                        // With this Id we can query TFS for more detailed information,
                        // using the TFS Client API (assuming the WIT already exists).
                        var stateChanged =
                            wit.Element("Columns").Elements("Column")
                               .FirstOrDefault(c => (string)c.Attribute("Column") == "System.State");
                        if (stateChanged != null)
                        {
                            var newStateName = stateChanged.Element("Value").Value;
                            if (newStateName == "Resolved")
                            {
                                throw new Exception("Cannot change state to Resolved!");
                            }
                        }
                    }
                }

                // Finally, call the base method implementation.
                return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata);
            }
        }

    4. Build your solution and overwrite the original .asmx with the new implementation referencing our new service version (don't forget to back it up first).

    5. Copy your project's .dll into the following path: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin

    6. Try saving a WorkItem into the Resolved state.

    Enjoy!
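    For completeness, the overwritten ClientService.asmx from step 4 would point at the new class; a hedged sketch (the assembly name "CustomTfsServices" is assumed):

        <%-- Modified ClientService.asmx referencing the custom service version --%>
        <%@ webservice language="C#" Class="CustomTfsClientService, CustomTfsServices" %>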


  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is a free, open source helper class that lets you run multiple pieces of work on parallel threads, get success, failure and progress updates on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after a certain time, and chain parallel work one after another. It's more convenient than .NET's BackgroundWorker because you don't have to declare one component per work item, nor declare event handlers to receive notifications and carry additional data through private variables. You can safely pass objects produced on a different thread to the success callback. Moreover, you can wait for work to complete before performing a certain operation, and you can abort all parallel work while it is in flight. If you are building a highly responsive WPF UI where you have to carry out multiple jobs in parallel yet want full control over their completion and cancellation, then the ParallelWork library is the right solution for you.

    I am using the ParallelWork library in my PlantUmlEditor project, a free open source UML editor built on WPF, where you can see some realistic uses of the library. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests that confirm it really does what it says it does. The source code of the library is part of the "Utilities" project in the PlantUmlEditor source code hosted at Google Code.

    The library comes in two flavors: the ParallelWork static class, a collection of static methods you can call, and the Start class, a fluent wrapper over ParallelWork that makes for more readable and aesthetically pleasing code.

    ParallelWork allows you to start work immediately on a separate thread, or to queue work to start after some delay. You can start immediate work on a new thread using the following methods:

        void StartNow(Action doWork, Action onComplete)
        void StartNow(Action doWork, Action onComplete, Action<Exception> failed)

    For example:

        ParallelWork.StartNow(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        },
        () =>
        {
            workEndedAt = DateTime.Now;
        });

    Or you can use the fluent Start.Work:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .Run();

    Besides simple execution of work on a parallel thread, you can have the parallel thread produce an object and then pass it to the success callback using these overloads:

        void StartNow<T>(Func<T> doWork, Action<T> onComplete)
        void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

    For example:

        ParallelWork.StartNow<Dictionary<string, string>>(
            () =>
            {
                test = new Dictionary<string, string>();
                test.Add("test", "test");
                return test;
            },
            (result) =>
            {
                Assert.True(result.ContainsKey("test"));
            });

    Or, the fluent way:

        Start<Dictionary<string, string>>.Work(() =>
        {
            test = new Dictionary<string, string>();
            test.Add("test", "test");
            return test;
        })
        .OnComplete((result) =>
        {
            Assert.True(result.ContainsKey("test"));
        })
        .Run();

    You can also start work after some delay using these methods:

        DispatcherTimer StartAfter(Action onComplete, TimeSpan duration)
        DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

    You can use this to perform a timed operation on the UI thread, as well as to perform an operation on a separate thread after some delay.
        ParallelWork.StartAfter(
            () =>
            {
                workStartedAt = DateTime.Now;
                Thread.Sleep(howLongWorkTakes);
            },
            () =>
            {
                workCompletedAt = DateTime.Now;
            },
            waitDuration);

    Or, the fluent way:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .RunAfter(waitDuration);

    There are several overloads of these functions for registering an exception callback or receiving progress updates from the background thread while work is in progress. For example, I use it in PlantUmlEditor to perform a background update of the application:

        // Check if there's a newer version of the app
        Start<bool>.Work(() =>
        {
            return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl);
        })
        .OnComplete((hasUpdate) =>
        {
            if (hasUpdate)
            {
                if (MessageBox.Show(Window.GetWindow(me),
                    "There's a newer version available. Do you want to download and install?",
                    "New version available",
                    MessageBoxButton.YesNo,
                    MessageBoxImage.Information) == MessageBoxResult.Yes)
                {
                    ParallelWork.StartNow(() =>
                    {
                        var tempPath = System.IO.Path.Combine(
                            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                            Settings.Default.SetupExeName);
                        UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath);
                    },
                    () => { },
                    (x) =>
                    {
                        MessageBox.Show(Window.GetWindow(me),
                            "Download failed. When you run next time, it will try downloading again.",
                            "Download failed",
                            MessageBoxButton.OK,
                            MessageBoxImage.Warning);
                    });
                }
            }
        })
        .OnException((x) =>
        {
            MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed",
                MessageBoxButton.OK, MessageBoxImage.Exclamation);
        });

    The above code shows how to get exception callbacks on the UI thread so that you can take the necessary actions there. Moreover, it shows how you can chain two parallel works to happen one after another.

    Sometimes you want to do some parallel work when the user does some activity on the UI. For example, you might want to save the file in an editor every 10 seconds while the user is typing. In such a case, you need to make sure you don't queue another parallel job while one is already queued, by starting new work only when no other background work is running:

        private void ContentEditor_TextChanged(object sender, EventArgs e)
        {
            if (!ParallelWork.IsAnyWorkRunning())
            {
                ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10));
            }
        }

    If you want to shut down your application and make sure no parallel work is going on, you can call the StopAll() method:

        ParallelWork.StopAll();

    If you want to wait for parallel work to complete, you can call WaitForAllWork(TimeSpan timeout). It blocks the current thread until all parallel work completes or the timeout period elapses:

        result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

    The result is true if all parallel work completed; if it's false, the timeout period elapsed and not all parallel work completed. For details on how this library is built and how it works, please read the following CodeProject article:

        ParallelWork: Feature rich multithreaded fluent task execution library for WPF
        http://www.codeproject.com/KB/WPF/parallelwork.aspx

    If you like the article, please vote for me.


  • Automating Form Login

    - by Greg_Gutkin
    Introduction

    A common task in configuring a web application for proxying in Pagelet Producer is setting up form autologin. PP provides a wizard-like tool for detecting the login form fields, but this is usually only the first step in configuring the feature. If the generated configuration doesn't seem to work, some additional manual modifications will be needed to complete the setup. This article will guide you through this process while steering you away from common pitfalls.

    For the purposes of this article, let's assume the following about your environment:

        Web application base URL: http://host/app (configured as the Resource Source URL in PP)
        Pagelet Producer base URL: http://pp/pagelets

    Form Field Auto-Detection

    Form autologin is configured in the PP Admin UI under resource_name/Autologin/Form Login. First, you'll enter the URL of the login form under "Login Form Identification". This enables the admin wizard to connect to and display the login page.

    Caution: Redirects. Make sure the entered URL matches what you see in the browser's address bar when the application login page is displayed. For example, even though you may be able to reach the login page by simply typing http://host/app, the URL you end up on may change to http://host/app/login via browser redirect(s). The second URL is the one you will want to use.

    Caution: External Login Servers. The login page may actually come from a different server than the application you are trying to proxy. For example, you may notice that the login page URL changes to http://hostB/appB. This is common when external SSO products are involved. There are two ways of dealing with this situation. One is to configure Pagelet Producer to participate in SSO; this approach is out of scope for this article and is discussed in a separate whitepaper (TODO add link). The second approach is to use the autologin feature to supply stored credentials to the SSO login form. Since the login form URL is not an extension of the application base URL (the PP resource URL), you will need to add a new PP resource for the SSO server and configure the login form on that resource instead of the original application resource. One side benefit of this additional resource is that it can be reused for other applications relying on the same SSO server for login.

    After entering the login page URL (make sure the dropdown says "URL"), click "Automatically Detect Form Fields". This brings up the web app's login page in a new browser window. Fill it out and submit it as you would normally. If everything goes right, Pagelet Producer intercepts the submitted values and fills in all the needed configuration data in the Admin UI. If the login form window doesn't close or the configuration data doesn't get filled in, you may not have entered the login page URL correctly; review the two cautionary notes above and make any necessary changes. If the form fields were filled in automatically, it's time to save the configuration and test it out. If you can access a protected area of the backend application via a proxied PP URL without filling out its login form, then you are pretty much done with login form configuration. The only other step to complete before declaring this aspect of configuration production-ready is configuring the form field source; you may skip to that section below.

    Manual Login Form Identification

    Let's take a closer look at Login Form Identification, which determines how Pagelet Producer recognizes login forms as such.
    URL

    The most efficient way of detecting login forms is by looking at the page URL. This method can only be used under the following conditions:

        The login page URL must be different from the post-login application URLs.
        The login page URL must stay constant regardless of the path taken to reach the page. For example, reaching the login page by going to the application base URL or to a specific protected URL must result in a redirect to the same login page URL (query string excluded). If only the query string parameters change, just leave the query string out of the configured login page URL.

    If either of these conditions is not fulfilled, you must switch to the RegEx approach below.

    RegEx

    If the login page URL is not uniform enough across all scenarios, or is indistinguishable from other page locations, PP can be configured to recognize it by looking at the page markup itself. This is accomplished by changing the dropdown to "RegEx". If regular expressions scare you, take comfort in the fact that in most cases you won't need to enter any special regex characters. Let's look at an example. Say you have a login form that looks like:

        <form id='loginForm' action='login?from=pageA' >
            <input id='user'>
            <input id='pass'>
        </form>

    Since this form has an id attribute, you can be reasonably sure that this login form can be uniquely identified across the web application by this snippet: "id='loginForm'" (unless, of course, your backend web application contains login forms for other apps). Since no wildcards are needed to find this snippet, you can enter it into the RegEx field as is; no special regular expression characters needed!

    If the web developer who created the form wasn't kind enough to provide a unique id, you will need to look for other snippets of the page to uniquely identify it. It could be the action URL, an input field id, or some other markup fragment. You should abstain from using UI text as an identifier, as it may change in translated versions of the page and prevent the login page logic from working for international users. You may need to turn to regular expression wildcard syntax if no simple match works. For more information on regular expressions, refer to the Resources section.

    Form Submit Location

    Now we'll look at the form submit location. If the captured URL contains query string parameters that are likely to change from one form submission to the next, you will need to change its type to RegEx. This type tells Pagelet Producer to parse the login page for the action URL and submit to the value found. The regular expression needs to point at the actual action URL with its first grouping expression. Taking the example form definition above, the form submit location regex would be:

        action='(.*?)'

    The parentheses identify the actual action URL, while the rest of the expression provides the context for finding it. The expression .*? is a so-called reluctant wildcard that matches any characters up to, but excluding, the single quote that follows. See the Resources section below for further information on regular expressions.
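    PP's regex syntax appears to follow standard Java regular expressions (the Resources section below points to the Java reference), so you can sanity-check an expression outside the product; a hedged sketch:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class LoginFormRegexCheck {
            public static void main(String[] args) {
                // Sample markup from the login form example above
                String html = "<form id='loginForm' action='login?from=pageA' >";

                // The form-submit-location expression: group 1 captures the action URL
                Matcher m = Pattern.compile("action='(.*?)'").matcher(html);
                if (m.find()) {
                    System.out.println(m.group(1)); // prints: login?from=pageA
                }
            }
        }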
Form Field Source

Change the source of any fields not exposed to the users of the login form (i.e. hidden fields) to "Generated". This means Pagelet Producer will just use the values returned by the web app rather than supplying values it stored. For fields that contain sensitive data or vary from user to user (e.g. username and password), change the source to User (Credential) Vault.

Logging Support

To help you troubleshoot your autologin configuration, PP provides some useful logging support. To turn on detailed logging for the autologin feature, navigate to Settings in the Admin UI. Under Logging, change the log level for AutoLogin to Finest.

Known Limitations

The autologin feature may not work as expected if login form fields (not just the values, but the DOM elements themselves) are generated dynamically by client-side JavaScript.

Resources

RegEx
- RegEx Reference from Java
- RegEx Test Tool

    Read the article

  • How Oracle Data Integration Customers Differentiate Their Business in Competitive Markets

    - by Irem Radzik
With data being a central force in driving innovation and competing effectively, data integration has become a key IT approach for removing silos and ensuring you work with consistent and trusted data. Especially with the release of the 12c versions, Oracle Data Integrator and Oracle GoldenGate offer easy-to-use and high-performance solutions that help companies with their critical data initiatives, including big data analytics, moving to cloud architectures, modernizing and connecting transactional systems, and more. In a recent press release we announced the great momentum and analyst recognition Oracle Data Integration products have achieved in the data integration and replication market. In this press release we described some of the key new features of Oracle Data Integrator 12c and Oracle GoldenGate 12c. In addition, a few of our 4500+ customers explained how Oracle's data integration platform helped them achieve their business goals. In this blog post I would like to go over what these customers shared about their experience.

Land O'Lakes is one of America's premier member-owned cooperatives, and offers an extensive line of agricultural supplies, as well as production and business services. Rich Bellefeuille, manager, ETL & data warehouse for Land O'Lakes, told us how GoldenGate helped them modernize their critical ERP system without impacting service and how they are moving to new projects with Oracle Data Integrator 12c: “With Oracle GoldenGate 11g, we've been able to migrate our enterprise-wide implementation of Oracle's JD Edwards EnterpriseOne ERP system to a new database and application server platform with minimal downtime to our business. Using Oracle GoldenGate 11g we reduced database migration time from nearly 30 hours to less than 30 minutes. Given our quick success, we are considering expansion of our Oracle GoldenGate 12c footprint. We are also in the midst of deploying a solution leveraging Oracle Data Integrator 12c to manage our pricing data to handle orders more effectively and provide a better relationship with our clients. We feel we are gaining higher productivity and flexibility with Oracle's data integration products."

ICON, a global provider of outsourced development services to the pharmaceutical, biotechnology and medical device industries, highlighted the competitive advantage that a solid data integration foundation brings. Diarmaid O'Reilly, enterprise data warehouse manager, ICON plc, said: “Oracle Data Integrator enables us to align clinical trials intelligence with the information needs of our sponsors. It helps differentiate ICON's services in an increasingly competitive drug-development industry." You can find more info on ICON's implementation here.

A popular use case for Oracle GoldenGate's real-time data integration is offloading operational reporting from critical transaction processing systems. SolarWorld, one of the world's largest solar-technology producers and the largest U.S. solar panel manufacturer, implemented Oracle GoldenGate for real-time data integration of manufacturing data for fast analysis. Russ Toyama, U.S.
senior database administrator for SolarWorld, told us real-time data helps their operations and that GoldenGate supports the high performance of their manufacturing systems: “We use Oracle GoldenGate for real-time data integration into our decision support system, which performs real-time analysis for manufacturing operations to continuously improve product quality, yield and efficiency. With reliable and low-impact data movement capabilities, Oracle GoldenGate also helps ensure that our critical manufacturing systems are stable and operate with high performance." You can watch the full interview with SolarWorld's Russ Toyama here.

Starwood Hotels and Resorts is one of the many customers that found out how well Oracle Data Integration products work with Oracle Exadata. Gordon Light, senior director of information technology for Starwood Hotels, says they saw a notable performance gain in loading their Oracle Exadata reporting environment: “We leverage Oracle GoldenGate to replicate data from our central reservations systems and other OLTP databases – significantly decreasing the overall ETL duration. Moving forward, we plan to use Oracle GoldenGate to help the company achieve near-real-time reporting.” You can listen to Starwood Hotels' implementation story here.

Many companies combine the power of Oracle GoldenGate with Oracle Data Integrator to have a single, integrated data integration platform for a variety of use cases across the enterprise. Ufone is another good example of that. The leading mobile communications service provider of Pakistan has improved customer service using timely customer data in its data warehouse. Atif Aslam, head of management information systems for Ufone, says: “Oracle Data Integrator and Oracle GoldenGate help us integrate information from various systems and provide up-to-date and real-time CRM data updates hourly, rather than daily. The applications have simplified data warehouse operations and allowed business users to make faster and better informed decisions to protect revenue in the fast-moving Pakistani telecommunications market.” You can read more about Ufone's use case here.

In our Oracle Data Integration 12c launch webcast back in November we also heard from BT's CTO Surren Parthab about their use of GoldenGate for moving to a private cloud architecture. Surren also shared his perspectives on the Oracle Data Integrator 12c and Oracle GoldenGate 12c releases. You can watch the video here.

These are only a few examples of leading companies that have made data integration and real-time data access a key part of their data governance and IT modernization initiatives. They have seen real improvements in how their businesses operate and differentiate in today's competitive markets.
You can read about other customer examples in our Ebook: The Path to the Future and access resources including white papers, data sheets, podcasts and more via our Oracle Data Integration resource kit.

    Read the article

  • Using Razor together with ASP.NET Web API

    - by Fredrik N
On the blog post “If Then, If Then, If Then, MVC” I found the following code example:

    [HttpGet]
    public ActionResult List()
    {
        var list = new[] { "John", "Pete", "Ben" };

        if (Request.AcceptTypes.Contains("application/json"))
        {
            return Json(list, JsonRequestBehavior.AllowGet);
        }

        if (Request.IsAjaxRequest())
        {
            return PartialView("_List", list);
        }

        return View(list);
    }

The code is an ASP.NET MVC controller action that reuses the same “business” code but returns JSON if the request requires JSON, a partial view when the request is an AJAX request, or a normal ASP.NET MVC view otherwise. The above code has several reasons to be changed and also does several things; it is not closed for modification. To extend the code with a new way of presenting the model, the code needs to be modified. So I started to think about how the above code could be rewritten so it would follow the Single Responsibility and open/closed principles. I came up with the following result, with the use of ASP.NET Web API:

    public String[] Get()
    {
        return new[] { "John", "Pete", "Ben" };
    }

It just returns the model, nothing more. The code will do one thing and it will do it well. But it will not solve the problem when it comes to returning views. If we use the ASP.NET Web API we can get the result as JSON or XML, but not as a partial view or as an ASP.NET MVC view. Wouldn't it be nice if we could do the following against the Get() method?

Accept: application/json - JSON will be returned (already part of the Web API)
Accept: text/html - returns the model as HTML by using a View

The best thing: it's possible!

By using the RazorEngine I created a custom MediaTypeFormatter (RazorFormatter, code at the end of this blog post) and associated it with the media type “text/html”. I decided to use convention over configuration to decide which Razor view should be used to render the model.
To register the formatter I added the following code to Global.asax:

    GlobalConfiguration.Configuration.Formatters.Add(new RazorFormatter());

Here is an example of an ApiController that simply returns a model:

    using System.Web.Http;

    namespace WebApiRazor.Controllers
    {
        public class CustomersController : ApiController
        {
            // GET api/values
            public Customer Get()
            {
                return new Customer { Name = "John Doe", Country = "Sweden" };
            }
        }

        public class Customer
        {
            public string Name { get; set; }
            public string Country { get; set; }
        }
    }

Because I decided to use convention over configuration, I only need to add a view with the same name as the model, Customer.cshtml. Here is the example of the view:

    <!DOCTYPE html>
    <html>
    <head>
        <script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.5.1.min.js" type="text/javascript"></script>
    </head>
    <body>
        <div id="body">
            <section>
                <div>
                    <hgroup>
                        <h1>Welcome '@Model.Name' to ASP.NET Web API Razor Formatter!</h1>
                    </hgroup>
                </div>
                <p>
                    Using the same URL "api/values" but using AJAX:
                    <button>Press to show content!</button>
                </p>
            </section>
        </div>
    </body>
    <script type="text/javascript">
        $("button").click(function () {
            $.ajax({
                url: '/api/values',
                type: "GET",
                contentType: "application/json; charset=utf-8",
                success: function (data, status, xhr) { alert(data.Name); },
                error: function (xhr, status, error) { alert(error); }
            });
        });
    </script>
    </html>

Now when I open up a browser and enter the following URL: http://localhost/api/customers, the above view will be displayed, rendering the model the ApiController returns. If I use AJAX against the same ApiController with the content type set to “json”, the ApiController will instead return the model as JSON. Here is part of a really early prototype of the Razor formatter (the code is far from perfect, just use it for testing).
I will rewrite the code and also make it possible to specify an attribute on the returned model, so it can decide which view to use when the media type is “text/html”; by default the formatter will use convention:

    using System;
    using System.Net.Http.Formatting;

    namespace WebApiRazor.Models
    {
        using System.IO;
        using System.Net;
        using System.Net.Http.Headers;
        using System.Reflection;
        using System.Threading.Tasks;
        using RazorEngine;

        public class RazorFormatter : MediaTypeFormatter
        {
            public RazorFormatter()
            {
                SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/html"));
                SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/xhtml+xml"));
            }

            //...

            public override Task WriteToStreamAsync(
                Type type,
                object value,
                Stream stream,
                HttpContentHeaders contentHeaders,
                TransportContext transportContext)
            {
                var task = Task.Factory.StartNew(() =>
                    {
                        // Get the path to the view by the name of the type.
                        // (Assumed convention: views live under ~/Views and are
                        // named after the model type, e.g. Customer.cshtml.)
                        var viewPath = System.Web.Hosting.HostingEnvironment
                            .MapPath("~/Views/" + type.Name + ".cshtml");

                        var template = File.ReadAllText(viewPath);
                        Razor.Compile(template, type, type.Name);
                        var razor = Razor.Run(type.Name, value);

                        var buf = System.Text.Encoding.Default.GetBytes(razor);
                        stream.Write(buf, 0, buf.Length);
                        stream.Flush();
                    });

                return task;
            }
        }
    }

Summary

By using formatters and the ASP.NET Web API we can easily extend our code without making any changes to our ApiControllers when we want to return a new format. This blog post just showed how we can extend the Web API to use Razor to format a returned model into HTML.

If you want to know when I post more blog posts, please feel free to follow me on twitter: @fredrikn

    Read the article

  • Building services with .Net Part 1

    - by Allan Rwakatungu
On the 26th of May 2010, I made a presentation to the .NET user group meeting (thanks to Malisa Ncube for organizing this event every month…). If you missed my presentation, we talked about why we should all be building services… better still, using the .NET framework. This blog post is an introduction to services, why you would want to build services and how you can build services using the .NET framework.

What is a service?

OASIS defines a service as "a mechanism to enable access to one or more capabilities, where the access is provided using a prescribed interface and is exercised consistent with constraints and policies as specified by the service description." [1]. If the above definition sounds too academic, you can also define services as loosely coupled units of functionality that have no calls to each other embedded in them. Instead of embedding calls to each other in their service code, services use defined protocols that describe how they pass and parse messages. This is a good way to think about services if you're from an object-oriented background. While in object-oriented programming functions make calls to each other, in service-oriented programming functions pass messages between each other.

Why would you want to use services?

1. If your enterprise architecture looks like this

Services are the building blocks for SOA. With SOA you can move away from the spaghetti infrastructure that is common in most enterprises. The complexity, or lack of visibility, of the integration points in your enterprise makes it difficult and costly to implement new initiatives and changes into the business - and even impossible in some cases - as it is not possible to identify the impact a change in one system might have on other systems. With services you can move to an architecture like this. Your building blocks move from spaghetti infrastructure to something more well-defined and manageable, letting you achieve cost efficiency and, not least, business agility - enabling you to react to changes in the market with speed and achieve operational efficiency and control.

2. If you want to become the next Gates or Zuckerberg.

Have you heard about Web 2.0? Mashups? Software as a service (SaaS)? Cloud computing? They all offer you the opportunity to have scalable but low-cost business models, and they are built using services. Some of my favorite companies that leverage services for their business models include:

https://www.salesforce.com/ (cloud CRM)
http://www.twitter.com (more people use twitter clients built by 3rd parties than their official clients)
http://www.kayak.com/ (compares data from other travel sites to give information to users in one location)

Services with the .NET framework

If you are a .NET developer and you want to develop services, Windows Communication Foundation (WCF) is the tool for you. WCF is Microsoft's unified programming model (service model) for building service-oriented applications. (Before .NET 3.0 you had several models for programming services in .NET, including .NET Remoting, web services (ASMX), COM+, Microsoft Message Queuing (MSMQ) etc.; after .NET 3.0 the programming model was unified into one, i.e. WCF.) Windows Communication Foundation (WCF) provides you:

1. A Software Development Kit (SDK) for creating SOA applications
2. A runtime for running services on the Windows platform

Why should you use Windows Communication Foundation if you're programming services?

1. It supports interoperable and open standards, e.g.
WS-* protocols for programming SOAP services.
2. It has a unified programming model. Whether you use TCP or HTTP or pipes or transmit using message queues, programmers need to learn just one way to program. Previously you had .NET Remoting, MSMQ, web services, COM+, and they were all done differently.
3. A productive programming model. You don't have to worry about all the plumbing involved in writing services. You have a rich declarative programming model, with features like logging, transactions and reliable messaging built into Windows Communication Foundation.

Understanding services in WCF

The basic principles of WCF are as easy as ABC:

A - Address. This is where the service is located.
B - Binding. This describes how you communicate with the service, e.g. whether to use TCP, HTTP or both, how to exchange security information with the service, etc.
C - Contract. This defines what the service can do, e.g. pay a water bill, make a phone call.

A - Addresses

In WCF, an address is a combination of transport, server name, port and path. Example addresses include:

http://localhost:8001
net.tcp://localhost:8002/MyService
net.pipe://localhost/MyPipe
net.msmq://localhost/private/MyService
net.msmq://localhost/MyService

B - Bindings

There are numerous ways to communicate with services - different ways that a message can be formatted/sent/secured - that allow you to tailor your service for the compatibility/performance you require for your solution.

Transport: you can use HTTP, TCP, MSMQ, named pipes, your own custom transport, etc.
Message: you can send a plain text, binary, or Message Transmission Optimization Mechanism (MTOM) message.
Communication security: no security, transport security, message security, authenticating and authorizing callers, etc.
Behaviour: your service can support transactions, be reliable, use queues, support AJAX, etc.

C - Contracts

You define what your service can do using:

Service contracts: define the operations your service can perform, its communications and behaviours.
Data contracts: define the messages that are passed from and into your service and how they are formatted.
Fault contracts: define error types in your service.

As an example, suppose your service shows money. You define your service contract using an interface:

    [ServiceContract]
    public interface IShowMeTheMoney
    {
        [OperationContract]
        Money Show();
    }

You define the data contract by annotating a class with the DataContract attribute and the fields you want to pass in the message as DataMembers. (Note: in the latest versions of WCF you don't have to use attributes if you are passing all of the object's properties in the message.)

    [DataContract]
    public class Money
    {
        [DataMember]
        public string Currency { get; set; }

        [DataMember]
        public Decimal Amount { get; set; }

        public string Comment { get; set; }
    }

Features of Windows Communication Foundation

Windows Communication Foundation is not only simple but feature rich, offering you several options to tweak your service to fit your business requirements. Some of the features of WCF include:

1. Workflow services. You can combine WCF with Windows Workflow Foundation (WF) to write workflow-type services.
2. Control over how your data (messages) is transferred and serialized, e.g. you can serialize your business objects as XML or binary.
3. Control over session management, instance creation and concurrency management, without writing code if you like.
4. Queues and reliable sessions. You can store messages from the sending client and later forward them to the receiving application.
You can also guarantee that messages will arrive at their destination.
5. Transactions: you can have different services participate in transactional operations that can be rolled back if needed.
6. Security. WCF has rich features for authorization and authentication, as well as keeping audit trails.
7. A web programming model. WCF allows developers to expose services as non-SOAP endpoints.
8. Built-in features that you can use to write JSON services and services that support AJAX applications.

And lots more. In my next blog I will show you how you can use WCF features to write a real-world business service.
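To make the ABC concrete, here is a minimal self-hosting sketch built around the IShowMeTheMoney contract above. Note that the MoneyService implementation class, the http://localhost:8001 address and the BasicHttpBinding choice are illustrative assumptions of mine, not part of the original presentation:

    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract]
    public interface IShowMeTheMoney
    {
        [OperationContract]
        Money Show();
    }

    [DataContract]
    public class Money
    {
        [DataMember]
        public string Currency { get; set; }

        [DataMember]
        public decimal Amount { get; set; }
    }

    // Hypothetical implementation of the service contract.
    public class MoneyService : IShowMeTheMoney
    {
        public Money Show()
        {
            return new Money { Currency = "USD", Amount = 100m };
        }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(MoneyService));

            // The ABC in one call: C = contract, B = binding, A = address.
            host.AddServiceEndpoint(
                typeof(IShowMeTheMoney),               // C - contract
                new BasicHttpBinding(),                // B - binding
                "http://localhost:8001/MoneyService"); // A - address

            host.Open();
            Console.WriteLine("Service is up; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }

The unified programming model shows up when you change transports: exposing the same contract over TCP is just a second AddServiceEndpoint call with a NetTcpBinding and a net.tcp:// address.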

    Read the article

  • SSH new connection begins to hang (not reject or terminate) after a day or so on Ubuntu 13.04 server

    - by kross
    Recently we upgraded the server from 12.04 LTS server to 13.04. All was well, including after a reboot. With all packages updated we began to see a strange issue, ssh works for a day or so (unclear on timing) then a later request for SSH hangs (cannot ctrl+c, nothing). It is up and serving webserver traffic etc. Port 22 is open (ips etc altered slightly for posting): nmap -T4 -A x.acme.com Starting Nmap 6.40 ( http://nmap.org ) at 2013-09-12 16:01 CDT Nmap scan report for x.acme.com (69.137.56.18) Host is up (0.026s latency). rDNS record for 69.137.56.18: c-69-137-56-18.hsd1.tn.provider.net Not shown: 998 filtered ports PORT STATE SERVICE VERSION 22/tcp open ssh OpenSSH 6.1p1 Debian 4 (protocol 2.0) | ssh-hostkey: 1024 54:d3:e3:38:44:f4:20:a4:e7:42:49:d0:a7:f1:3e:21 (DSA) | 2048 dc:21:77:3b:f4:4e:74:d0:87:33:14:40:04:68:33:a6 (RSA) |_256 45:69:10:79:5a:9f:0b:f0:66:15:39:87:b9:a1:37:f7 (ECDSA) 80/tcp open http Jetty 7.6.2.v20120308 | http-title: Log in as a Bamboo user - Atlassian Bamboo |_Requested resource was http://x.acme.com/userlogin!default.action;jsessionid=19v135zn8cl1tgso28fse4d50?os_destination=%2Fstart.action Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 12.89 seconds Here is the ssh -vvv: ssh -vvv x.acme.com OpenSSH_5.9p1, OpenSSL 0.9.8x 10 May 2012 debug1: Reading configuration data /Users/tfergeson/.ssh/config debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to x.acme.com [69.137.56.18] port 22. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "/Users/tfergeson/.ssh/id_rsa" as a RSA1 public key debug1: identity file /Users/tfergeson/.ssh/id_rsa type 1 debug1: identity file /Users/tfergeson/.ssh/id_rsa-cert type -1 debug1: identity file /Users/tfergeson/.ssh/id_dsa type -1 debug1: identity file /Users/tfergeson/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4 debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "x.acme.com" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:10 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 130/256 debug2: bits set: 503/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA dc:21:77:3b:f4:4e:74:d0:87:33:14:40:04:68:33:a6 debug3: load_hostkeys: loading entries for host "x.acme.com" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:10 debug3: load_hostkeys: loaded 1 keys debug3: load_hostkeys: loading entries for host "69.137.56.18" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:6 debug3: load_hostkeys: loaded 1 keys debug1: Host 'x.acme.com' is known and matches the RSA host key. 
debug1: Found key in /Users/tfergeson/.ssh/known_hosts:10 debug2: bits set: 493/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/tfergeson/.ssh/id_rsa (0x7ff189c1d7d0) debug2: key: /Users/tfergeson/.ssh/id_dsa (0x0) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/tfergeson/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 277 debug2: input_userauth_pk_ok: fp 3c:e5:29:6c:9d:27:d1:7d:e8:09:a2:e8:8e:6e:af:6f debug3: sign_and_send_pubkey: RSA 3c:e5:29:6c:9d:27:d1:7d:e8:09:a2:e8:8e:6e:af:6f debug1: read PEM private key done: type RSA debug1: Authentication succeeded (publickey). Authenticated to x.acme.com ([69.137.56.18]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: fd 3 setting TCP_NODELAY debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env ATLAS_OPTS debug3: Ignored env rvm_bin_path debug3: Ignored env TERM_PROGRAM debug3: Ignored env GEM_HOME debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env CLICOLOR debug3: Ignored env IRBRC debug3: Ignored env TMPDIR debug3: Ignored env Apple_PubSub_Socket_Render debug3: Ignored env TERM_PROGRAM_VERSION debug3: Ignored env MY_RUBY_HOME debug3: Ignored env TERM_SESSION_ID debug3: Ignored env USER debug3: Ignored env COMMAND_MODE debug3: Ignored env rvm_path debug3: Ignored env COM_GOOGLE_CHROME_FRAMEWORK_SERVICE_PROCESS/USERS/tfergeson/LIBRARY/APPLICATION_SUPPORT/GOOGLE/CHROME_SOCKET debug3: Ignored env JPDA_ADDRESS debug3: Ignored env APDK_HOME debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env Apple_Ubiquity_Message debug3: Ignored env __CF_USER_TEXT_ENCODING debug3: Ignored env rvm_sticky_flag debug3: Ignored env MAVEN_OPTS debug3: Ignored env LSCOLORS debug3: Ignored env rvm_prefix debug3: Ignored env PATH debug3: Ignored env PWD debug3: Ignored env JAVA_HOME debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env JPDA_TRANSPORT debug3: Ignored env rvm_version debug3: Ignored env M2_HOME debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env rvm_ruby_string debug3: Ignored env LOGNAME debug3: Ignored env M2_REPO debug3: Ignored env GEM_PATH debug3: Ignored env AWS_RDS_HOME debug3: Ignored env rvm_delete_flag debug3: Ignored env EC2_PRIVATE_KEY debug3: Ignored env RUBY_VERSION debug3: Ignored env SECURITYSESSIONID debug3: Ignored env EC2_CERT debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 I can hard reboot (only mac monitors at that location) and it will again be 
accessible. This now happens every single time. It is imperative that I get it sorted. The strange thing is that it behaves initially then starts to hang after several hours. I perused logs previously and nothing stood out. From the auth.log, I can see that it has allowed me in, but still I get nothing back on the client side: Sep 20 12:47:50 cbear sshd[25376]: Accepted publickey for tfergeson from 10.1.10.14 port 54631 ssh2 Sep 20 12:47:50 cbear sshd[25376]: pam_unix(sshd:session): session opened for user tfergeson by (uid=0) UPDATES: Still occurring even after setting UseDNS no and commenting out #session optional pam_mail.so standard noenv This does not appear to be a network/dns related issue, as all services running on the machine are as responsive and accessible as ever, with the exception of sshd. Any thoughts on where to start?

    Read the article

  • How to create multiboot flash drive

    - by Nrew
    I've found a guide here: http://www.pendrivelinux.com/boot-multiple-iso-from-usb-multiboot-usb/ And found this menu.lst in my flash drive, which seems to be the one that I'm seeing when I boot using my flash drive: # This Menu Created by Lance http://www.pendrivelinux.com # Ongoing Suggested Menu Entries and the Suggestor are noted! default 0 timeout 30 color NORMAL HIGHLIGHT HELPTEXT HEADING splashimage=(hd0,0)/splash.xpm.gz foreground=FFFFFF background=0066FF title Memtest86+ find --set-root /memtest86+-4.00.iso map --mem /memtest86+-4.00.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by madprofessor title Boot Clonezilla root (hd0,0) kernel /clonezilla/live/vmlinuz live-media-path=clonezilla/live bootfrom=/dev/sd boot=live union=aufs noprompt ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" ocs_lang="" vga=791 ip=frommedia initrd /clonezilla/live/initrd.img title Parted Magic 4.9 (Partition Tools) find --set-root /pmagic-4.9.iso map /pmagic-4.9.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by Deb title Partition Wizard 4.2 (Partition Tools) find --set-root /pwhe42.iso map /pwhe42.iso (hd32) map --hook root (hd32) chainloader (hd32) title Balder DOS image (FreeDOS) map --unsafe-boot /balder10.img (fd0) map --hook chainloader --force (fd0)+1 rootnoverify (fd0) # Suggested by Szymon Silski title Linux Mint 8 find --set-root /LinuxMint-8.iso map /LinuxMint-8.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/mint.seed boot=casper persistent iso-scan/filename=/LinuxMint-8.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 find --set-root /ubuntu-10.04-desktop-i386.iso map /ubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz title Xubuntu 10.04 (XFCE Desktop) find --set-root /xubuntu-10.04-desktop-i386.iso map /xubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz title Kubuntu 10.04 (KDE Desktop) find --set-root /kubuntu-10.04-desktop-i386.iso map /kubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz # Suggested by Ambriel title Lubuntu 10.04 (LXDE Lightweight Desktop) find --set-root /lubuntu-10.04.iso map /lubuntu-10.04.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/lubuntu.seed boot=casper persistent iso-scan/filename=/lubuntu-10.04.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 Netbook Remix (NetBook Distro) find --set-root /ubuntu-10.04-netbook-i386.iso map /ubuntu-10.04-netbook-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-netbook-i386.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 Server Edition Installer (32 bit Installer Only) find --set-root /ubuntu-10.04-server-i386.iso map /ubuntu-10.04-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-10.04-server-i386.iso splash initrd /install/initrd.gz title Ubuntu 9.10 find 
--set-root /ubuntu-9.10-desktop-i386.iso map /ubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz title Xubuntu 9.10 find --set-root /xubuntu-9.10-desktop-i386.iso map /xubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz title Kubuntu 9.10 find --set-root /kubuntu-9.10-desktop-i386.iso map /kubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz # Ubuntu Server and Netbook Remix suggested by Wojciech Holek title Ubuntu 9.10 Server Edition Installer (Installer Only) find --set-root /ubuntu-9.10-server-i386.iso map /ubuntu-9.10-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-9.10-server-i386.iso splash initrd /install/initrd.gz title Ubuntu 9.10 Netbook Remix (NetBook Distro) find --set-root /ubuntu-9.10-netbook-remix-i386.iso map /ubuntu-9.10-netbook-remix-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-netbook-remix-i386.iso splash initrd /casper/initrd.lz title Ubuntu 9.10 Rescue Remix (Recovery Tools) find --set-root /ubuntu-rescue-remix-9-10-revision1.iso map /ubuntu-rescue-remix-9-10-revision1.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper iso-scan/filename=/ubuntu-rescue-remix-9-10-revision1.iso splash initrd /casper/initrd.lz title DSL 4.4.10 find --set-root /dsl-4.4.10-initrd.iso map --mem /dsl-4.4.10-initrd.iso (hd32) map --hook root (hd32) chainloader (hd32) title AVG Rescue CD (Anti-Virus + Anti-Spyware) find --set-root /avg_arl_en_90_100114.iso map /avg_arl_en_90_100114.iso (hd32) map --hook chainloader (hd32) title Ultimate Boot CD 4.11 find --set-root /ubcd411.iso map /ubcd411.iso (hd32) map --hook chainloader (hd32) title OphCrack XP 2.3.1 (XP Password Cracker) find --set-root /ophcrack-xp-livecd-2.3.1.iso map /ophcrack-xp-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz title OphCrack Vista 2.3.1 (Vista Password Cracker) find --set-root /ophcrack-vista-livecd-2.3.1.iso map /ophcrack-vista-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz # Suggested by Greg Steer title Offline NT Password & Registy Editor find --set-root /cd080802.iso map /cd080802.iso (hd32) map --hook chainloader (hd32) title SliTaz 2.0 find --set-root /slitaz-2.0.iso map --mem /slitaz-2.0.iso (hd32) map --hook chainloader (hd32) title Riplinux 9.3 find --set-root /RIPLinuX-9.3.iso map --heads=0 --sectors-per-track=0 /RIPLinuX-9.3.iso (0xff) || map --heads=0 --sectors-per-track=0 --mem /RIPLinuX-9.3.iso (0xff) map --hook chainloader (0xff) # Suggested by Sunny title YlmF (Windows Like OS) find --set-root /YlmF_OS_EN_v1.0.iso map /YlmF_OS_EN_v1.0.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed 
boot=casper persistent iso-scan/filename=/YlmF_OS_EN_v1.0.iso splash initrd /casper/initrd.lz # Suggested by Martin Andersson title DBAN 1.0.7 (Drive Nuker) find --set-root /dban-1.0.7_i386.iso map --mem /dban-1.0.7_i386.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by Robin McGough title xPUD 0.9.2 (NetBook Distro) find --set-root --ignore-floppies --ignore-cd /xpud-0.9.2.iso map --heads=0 --sectors-per-track=0 /xpud-0.9.2.iso (hd32) map --hook chainloader (hd32) title Puppy 4.3.1 find --set-root /puppy/pup-431.sfs kernel /puppy/vmlinuz initrd /puppy/initrd.gz # Suggested by Relst title Run a Linux OS from the Internet kernel /gpxe.lkrn I also put some .iso files for os installers (Windows xp sp2 and Ubuntu 10.04) But they didn't show up in the list when I booted Do I need to: extract the .iso files and put in in their respective folders? Add the os that I added on the menu.lst? How do I add the iso image(os) in the menu.lst? Before adding the .iso files I first made a folder named Windows xp sp2 then placed the .iso files in there. Please help, I think I need to add the folder name or the file name on the menu.lst but I don't know how

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for management and monitoring of the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product.

In the first release of Oracle Utilities Customer Care And Billing (V1.x I am talking about), we used Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was the performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked as a performance consultant, I used this report to identify badly performing services and also gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some MBeans supplied by the Web Application Server, it was not at the same granularity as txrpt, or as useful.

I am happy to say we have not only reintroduced this facility in Oracle Utilities Application Framework, but it is now accessible via JMX, and we have also added more detail to the performance tracking. Most of this new design came from working with customers around the world to make sure we introduced a new feature that not only satisfied their performance tracking needs but allowed for finer-grained performance analysis.

As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in RMI Port number for JMX Business and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine:

spl.runtime.management.rmi.port=6750
spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector
jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled

ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see JMX Security for more details.

Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration).
Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables, or for a remote connection ensure java is in your path and jconsole.jar in your classpath), you specify the URL from the spl.runtime.management.connector.url.default entry shown above. You are then able to track performance of the product using the PerformanceStatistics MBean.

The attributes of the PerformanceStatistics MBean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following:

- The Service Type is captured so you can filter the results by the type of service. For maintenance-type services you can even see the transaction type (ADD, CHANGE etc.), so you can compare the performance of updates against read transactions.
- The Minimum and Maximum are also collected to give you an idea of the spread of performance.
- The last call is recorded. The date, time and user of the last call give you an idea of the timeliness of the data.
- The MBean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site.

There are a number of interesting operations that can also be performed:

- reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is ok for most people. But what if you wanted to be more granular? This operation allows you to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset.
- completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can then be loaded into a database, a tool or simply into your favourite spreadsheet for analysis. Here is an extract of an execution dump from my demonstration environment to give you an idea of the format:

ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User
...
CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN
CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA
CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS
CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON
CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN
CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN
InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN
InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN
MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN
NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN
...

There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full set of operations and attributes. This is one of the many features I am proud that we implemented, as it allows flexible monitoring of the performance of the product.
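Since completeExecutionDump produces plain CSV, it lends itself to simple post-processing. As a purely illustrative sketch (the column layout is taken from the sample above; the dump.csv file name and the 500 ms threshold are assumptions of mine), here is a small C# program that flags services whose average response time exceeds a threshold:

    using System;
    using System.Globalization;
    using System.IO;
    using System.Linq;

    class ExecutionDumpFilter
    {
        static void Main()
        {
            // dump.csv is assumed to contain the saved output of the
            // completeExecutionDump operation, one service per line.
            const double thresholdMs = 500.0;

            foreach (string line in File.ReadLines("dump.csv"))
            {
                string[] cols = line.Split(',').Select(c => c.Trim()).ToArray();
                if (cols.Length < 6)
                    continue; // skip the "..." filler lines

                double avg;
                if (!double.TryParse(cols[4], NumberStyles.Float,
                                     CultureInfo.InvariantCulture, out avg))
                    continue; // the header row ("Avg Time") fails to parse

                if (avg > thresholdMs)
                    Console.WriteLine("{0} ({1}): avg {2} ms over {3} calls",
                                      cols[0], cols[1], avg, cols[5]);
            }
        }
    }

Run against the extract above with a 500 ms threshold, this would flag the CILBBLLP ADD and CHANGE transactions and MenuLoginService - the same services a txrpt-style review would have singled out.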

    Read the article

  • CodePlex Daily Summary for Sunday, February 20, 2011

Popular Releases

- View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator (1.0.119.18): Initial release.
- Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features: WPF desktop explorer client; Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above); supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File; explorer supports operations running on multiple remote applications at the same time; explorer detects application disconnect (1-2 second delay); explorer confirms operation completed status; explorer d...
- Advanced Explorer for Wp7: Advanced Explorer for Wp7 Version 1.4 Test8: Added option to run under the lock screen. Fixed a bug when you open a pdf/mobi file without starting Adobe Reader/Amazon Kindle first. Boosted loading time for folders. Added the \Windows directory (all devices). You can now interact with the filesystem while it is loading!
- Game Files Open - Map Editor: Game Files Open - Map Editor Beta 2 v1.0.0.0: The second beta release of the Map Editor; we have fixed a big bug in the file regeneration.
- Document.Editor: 2011.6: What's new for Document.Editor 2011.6: new Left to Right and Right to Left support, new Indent more/less support, improved Home tab, improved tooltips/shortcut keys, minor bug fixes, improvements and speed-ups.
- Catel - WPF and Silverlight MVVM library: 1.2: Catel history: (+) Added (*) Changed (-) Removed (x) Error / bug (fix). For more information about issues or new feature requests, please visit: http://catel.codeplex.com. Version 1.2, release date 2011/02/17. Added/fixed: (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...
- All-In-One Code Framework (Chinese edition): 2011-02-18 release: features the AzureBingMaps sample covering Windows Azure, SQL Azure, Windows Azure AppFabric, Windows Live Messenger Connect and Bing Maps, with an HTML client, a Silverlight client for Windows PC and Mac, and a Silverlight client for Windows Phone. Details: http://blog.csdn.net/sjb5201/archive/2011...
- Image.Viewer: 2011: First version of 2011.
- Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...
- VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 29 Beta: Note: this release does not work with custom VsTortoise toolbars; these get removed every time you shut down Visual Studio (#7940). Build 29 (beta). New: added VsTortoise Solution Explorer integration for Web Project Folder, Web Folder and Web Item. Fix: TortoiseProc was called with invalid parameters when using TSVN 1.4.x or older (#7338, thanks psifive). Fix: add-in does not work when "TortoiseSVN/bin" is not added to the PATH environment variable (#7357). Fix: missing error message when ...
- Sense/Net CMS - Enterprise Content Management: SenseNet 6.0.3 Community Edition: We are happy to introduce the latest version of Sense/Net with integrated ECM workflow capabilities! In the past weeks we have been working hard to migrate the product to .NET 4 and include a workflow framework in Sense/Net built upon Windows Workflow Foundation 4. This brand new feature enables developers to define and develop workflows, and supports users when building and setting up complicated business processes involving content creation and response...
- thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features: added a new option that allows properties on data contract types to be marked as virtual. Bug fixes: fixed a bug caused by certain project properties not being available on Web Service Software Factory projects; fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used; the menu item code now caters for CommandBar instances that are not available, for example the Web Item CommandBar does not exist ...
- Terminals: Version 2 - RC1: The "Clean Install" will overwrite your log4net configuration (if you have one). If you run in a portable environment, you can use the "Clean Install" and target your portable folder; tested and it works fine. Changes for this release: reworked the ToolStrip settings to avoid the VS.NET clash with auto-generated .settings files (renamed to .settings.config); packaged both the log4net and ToolStripSettings files into the installer; upgraded the version inform...
- SQL Server Process Info Log: Release: release.
- AllNewsManager.NET: AllNewsManager.NET 1.3: This new version provides several new features, improvements and bug fixes. Some new features: online users, avatars, a copy function (to create a new article from another one), SEO improvements (friendly URLs), new admin buttons, and more...
- Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011): moved to beta stage; publish photo feature; "email" field of User object added; new Graph API objects: Group, Event; new Graph API connections: likes, groups, events.
- DJME - The jQuery extensions for ASP.NET MVC: DJME2 - The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can go to http://www.dotnetage.com/djme.html. What is new? The Grid extension was added; the ModelBinder was added, which helps you create bindable data actions; the DnaFor() control factory was added, enabling model-bindable extensions; enhanced ListBox and ComboBox data binding.
- Jint - Javascript Interpreter for .NET: Jint - 0.9.0: New CLR interoperability features. Many bugfixes.
- Build Version Increment Add-In Visual Studio: Build Version Increment v2.4.11046.2045: Fixes and/or improvements. Major: added complete support for VC projects including .vcxproj and .vcproj; all padding issues fixed; a project's assembly versions are only changed if the project has been modified. Minor: order of versioning style values is now according to their respective positions in the attributes, i.e. Major, Minor, Build, Revision; fixed issue with global variable storage with some projects; fixed issue where if a project item's file does not exist, a ...
- Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.1: Coding4Fun.Phone.Toolkit v1.1 release. Bug fixes and minor feature requests added.

New Projects

- .NET Core Audio APIs: This project provides a lightweight set of .NET wrappers around each section of the Windows Core Audio APIs.
- Akwen: Akwen aspirant.
- bigserve: written in loads of languages, a server alternative to IIS and Apache.
- Cathedral game framework: (Will eventually be) A framework for creating 2d region-based games in Silverlight.
- Gmail Like File Uploader In .NET: Many times I wonder: how does Gmail upload files and monitor the upload status? I suppose many of you have gone through the same experience. Most of you know that the Choose File button on the Gmail page is actually a Flash button, but how about the progress bar? AJAX, right.
- inicomInterface: a COM interface component for ini configuration files.
- iOnlineExamCenter - Trung Tâm Thi Trực Tuyến (Online Exam Center): a study of multiple-choice examination methods, applied to building an online exam center. The application is built on .NET Framework v4.0, ASP.NET MVC v3, Entity Framework v4, Silverlight v4 and jQuery.
- Lee Utility Library: Lee Utility Library.
- SocialCore: Social Core is a framework for developing social applications using a distributed infrastructure. The focus of Social Core is on collaboration, security and transparency. Privacy is not dead, it just is not implemented correctly.
- TestAmazonCloud: tac.
- Tftp.Net: This project implements the TFTP (Trivial File Transfer) protocol for .NET in an easy-to-use library. It allows you to integrate TFTP client and server functionality into your project. It is stable, unit-tested and comes with a sample TFTP client and server.
- Using the Repository Pattern and DI to switch between SQL and XML: This is an example application showing how Dependency Injection and the Repository pattern can be used to allow interchangeable persistence between XML and SQL.
- WebForms Razor converter - WinForms: Telerik developed a command-line "WebForms to Razor (CSHTML) conversion tool" and published it on git - https://github.com/telerik/razor-converter. This project uses the same Telerik API, but the client is GUI/WinForms and allows developers to convert multiple files in one go.
- XNA and Dependency Injection: Test code sample for XNA and Dependency Injection.
- XNA and IoC Container: Test code sample for XNA and IoC Container.
- XNA and Unit Testing: Test code sample for XNA and Unit Testing.
- Xplore World: Xplore World is an advanced web browser available for Windows XP/Vista/7, currently available only in Spanish. With this browser you can do marvelous things like viewing images from any website your own way, managing users, editing your bookmarks, etc.
- ZO (ZeroOne): ZeroOne is the editor for programmers who think in binary.

    Read the article

  • LLBLGen Pro v3.1 released!

    - by FransBouma
    Yesterday we released LLBLGen Pro v3.1! Version 3.1 comes with new features and enhancements, which I'll describe briefly below. v3.1 is a free upgrade for v3.x licensees.

    What's new / changed?

    Designer

    - Extensible import system. An extensible import system has been added to the designer to import project data from external sources. Importers are plug-ins which import project meta-data (like entity definitions, mappings and relational model data) from an external source into the loaded project. In v3.1, an importer plug-in for importing project elements from existing LLBLGen Pro v3.x project files is included. You can use this importer to create source projects from which you import parts of models to build your actual project.
    - Model-only relationships. In v3.1, relationships of the type 1:1, m:1 and 1:n can be marked as model-only. A model-only relationship isn't required to have a backing foreign key constraint in the relational model data. They're ideal for projects which have to work with relational databases where changes can't always be made, or where some relationships can't be added to the relational model (e.g. relationships which are important for the entity model but are not allowed in the relational model for some reason).
    - Custom field ordering. Although fields in an entity definition don't really have an ordering, in some situations it can be important to have the entity fields in a given order, e.g. when you use compound primary keys. Field ordering can be defined using a pop-up dialog which can be opened in various ways, e.g. from the project explorer, model view and entity editor. It can also be set automatically during refreshes, based on new settings.
    - Command line relational model data refresher tool, CliRefresher.exe. The command line refresh tool shipped with v2.6 is now available for v3.1 as well.
    - Navigation enhancements in various designer elements. It's now easier to find elements like entities and typed views in the project explorer from editors, to navigate to related entities in the project explorer by right-clicking a relationship, to navigate to the super-type in the project explorer when right-clicking an entity, and to navigate to the sub-type when right-clicking a sub-type node in the project explorer.
    - Minor visual enhancements / tweaks.

    LLBLGen Pro Runtime Framework

    - Entity creation is now up to 30% faster and takes 5% less memory. Instantiating an entity object has been optimized further by tweaks inside the framework, making it up to 30% faster; it also takes up to 5% less memory than in v3.0.
    - Prefetch Path node merging is now up to 20-25% faster. Setting entity references used to require the creation of a new relationship object. As this relationship object is always used internally, it could be cached (it's used for syncing only). This increases performance of the merging functionality by 20-25%.
    - Entity fetches are now up to 20% faster. A large number of tweaks have been applied to make entity fetches up to 20% faster than in v3.0.
    - Full WCF RIA support. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF RIA application using the VS.NET tools for WCF RIA Services. WCF RIA Services is a Microsoft technology for .NET 4, typically used within Silverlight applications.
    - SQL Server DQE compatibility level is now per instance (usable in Adapter). It's now possible to set the compatibility level of the SQL Server Dynamic Query Engine (DQE) per instance of the DQE instead of only via the global setting available before. The global setting is still available and is used as the default value for the per-instance compatibility level. You can use this to switch between CE Desktop and normal SQL Server compatibility per DataAccessAdapter instance.
    - Support for the COUNT_BIG aggregate function (SQL Server specific). COUNT_BIG has been added to the list of available aggregate functions in the framework.
    - Minor changes / tweaks.

    I'm especially pleased with the import system, as it makes working with entity models a lot easier. The import system lets you import from another LLBLGen Pro v3 project any entity definition, mapping and/or meta-data like table definitions. This way you can build repository projects where you store model fragments, e.g. the building blocks for a customer-order system, a user credential model, or any model you can think of. In most projects you'll recognize that some parts of your new model look familiar; in those cases it would have been easier if you could have imported these parts from projects you had created before. With LLBLGen Pro v3.1 you can. For example, say you have an Oracle schema called CRM which contains the bread 'n' butter customer-order-product kind of model. You create an entity model from that schema and save it in a project file. Now you start working on another project for another customer and you have to use SQL Server. You also adopt model-first development, so you develop the entity model from scratch, as there's no existing database. As this customer also requires a CRM-like entity model, you import the entities from your saved Oracle project into the new SQL Server targeting project. Because you don't work with Oracle this time, you don't import the relational meta-data, just the entities, their relationships and possibly their inheritance hierarchies, if any. As they're now entities in your project, you can change them a bit to match the new customer's requirements. This can save you a lot of time, because you can re-use pre-fab model fragments for new projects. In the example above there are no tables yet (as you work model-first), so the forward mapping capabilities of LLBLGen Pro v3 create the tables, PK constraints, unique constraints and FK constraints for you. This way you can build a nice repository of model fragments which you can re-use in new projects.

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip. Great IT Plans Get Executed When You Respect the Users Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools which demonstrate a respect for the user would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of drag the business along, to unlock the value of their investment in PeopleSoft. This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. 
In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offers, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.” We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort! Co-existence of PeopleSoft and Fusion Applications Just Makes Sense As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others, they want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works by three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2, and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of  Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers. Next Generation Business Intelligence Is the Key to the Future The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what’s most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order fulfillment decision-making. 
To them, the path to future growth lies in having the ability to analyze unstructured data rapidly and intuitively and leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles setup, people ranked against profiles, margin targets setup, margins measured, distances setup, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions. In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results. --- Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article

  • Tricks and Optimizations for your Sitecore website

    - by amaniar
    When working with Sitecore there are some optimizations/configurations I usually repeat in order to make my app production ready. The following is a small list I have compiled from experience, Sitecore documentation, communication with Sitecore engineers, etc. It is not meant to be technically complete and might not fit all environments.

    Simple configurations that can make a difference:

    1) Configure Sitecore caches. This is the most straightforward and surest way of increasing the performance of your website. Data and item cache sizes (/databases/database [id=web]) should be configured as needed. You may start with a smaller number and tune them as needed.

    <cacheSizes hint="setting">
      <data>300MB</data>
      <items>300MB</items>
      <paths>5MB</paths>
      <standardValues>5MB</standardValues>
    </cacheSizes>

    Tune the html, registry, etc. cache sizes for your website:

    <cacheSizes>
      <sites>
        <website>
          <html>300MB</html>
          <registry>1MB</registry>
          <viewState>10MB</viewState>
          <xsl>5MB</xsl>
        </website>
      </sites>
    </cacheSizes>

    Tune the prefetch cache settings under the App_Config/Prefetch/ folder. Sample /App_Config/Prefetch/Web.Config:

    <configuration>
      <cacheSize>300MB</cacheSize>
      <!--preload items that use this template-->
      <template desc="mytemplate">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</template>
      <!--preload this item-->
      <item desc="myitem">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</item>
      <!--preload children of this item-->
      <children desc="childitems">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</children>
    </configuration>

    Break your page into sublayouts so you can cache most of them. Read the caching configuration reference: http://sdn.sitecore.net/upload/sitecore6/sc62keywords/cache_configuration_reference_a4.pdf

    2) Disable Analytics for the shell site:

    <site name="shell" virtualFolder="/sitecore/shell" physicalFolder="/sitecore/shell" rootPath="/sitecore/content" startItem="/home" language="en" database="core" domain="sitecore" loginPage="/sitecore/login" content="master" contentStartItem="/Home" enableWorkflow="true" enableAnalytics="false" xmlControlPage="/sitecore/shell/default.aspx" browserTitle="Sitecore" htmlCacheSize="2MB" registryCacheSize="3MB" viewStateCacheSize="200KB" xslCacheSize="5MB" />

    3) Increase the check interval for the MemoryMonitorHook so it doesn't run every 5 seconds (the default):

    <hook type="Sitecore.Diagnostics.MemoryMonitorHook, Sitecore.Kernel">
      <param desc="Threshold">800MB</param>
      <param desc="Check interval">00:05:00</param>
      <param desc="Minimum time between log entries">00:01:00</param>
      <ClearCaches>false</ClearCaches>
      <GarbageCollect>false</GarbageCollect>
      <AdjustLoadFactor>false</AdjustLoadFactor>
    </hook>

    4) Set Analytics.PerformLookup (Sitecore.Analytics.config) to false if your environment doesn't have access to the internet or you don't intend to use reverse DNS lookup:

    <setting name="Analytics.PerformLookup" value="false" />

    5) Set the value of the "Media.MediaLinkPrefix" setting to "-/media":

    <setting name="Media.MediaLinkPrefix" value="-/media" />

    Add the following line to the customHandlers section:

    <customHandlers>
      <handler trigger="-/media/" handler="sitecore_media.ashx" />
      <handler trigger="~/media/" handler="sitecore_media.ashx" />
      <handler trigger="~/api/" handler="sitecore_api.ashx" />
      <handler trigger="~/xaml/" handler="sitecore_xaml.ashx" />
      <handler trigger="~/icon/" handler="sitecore_icon.ashx" />
      <handler trigger="~/feed/" handler="sitecore_feed.ashx" />
    </customHandlers>

    Link: http://squad.jpkeisala.com/2011/10/sitecore-media-library-performance-optimization-checklist/

    6) Performance counters should be disabled in production if they are not being monitored:

    <setting name="Counters.Enabled" value="false" />

    7) Disable item/memory/timing threshold warnings. Due to the nature of this component, it brings no value in production:

    <!--<processor type="Sitecore.Pipelines.HttpRequest.StartMeasurements, Sitecore.Kernel" />-->
    <!--<processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel">
      <TimingThreshold desc="Milliseconds">1000</TimingThreshold>
      <ItemThreshold desc="Item count">1000</ItemThreshold>
      <MemoryThreshold desc="KB">10000</MemoryThreshold>
    </processor>-->

    8) The ContentEditor.RenderCollapsedSections setting is a hidden setting in the web.config file, which is true by default. Setting it to false will improve client performance for authoring environments:

    <setting name="ContentEditor.RenderCollapsedSections" value="false" />

    9) Add a machineKey section to your Web.Config file when using a web farm (see the P.S. at the end of this post for a sample). Link: http://msdn.microsoft.com/en-us/library/ff649308.aspx

    10) If you get errors in the log files similar to:

    WARN Could not create an instance of the counter 'XXX.XXX' (category: 'Sitecore.System') Exception: System.UnauthorizedAccessException Message: Access to the registry key 'Global' is denied.

    make sure the application pool user is a member of the "Performance Monitor Users" group on the server.

    11) Disable WebDAV configurations on the CD server if WebDAV is not being used. More: http://sitecoreblog.alexshyba.com/2011/04/disable-webdav-in-sitecore.html

    12) Change log4net settings to log only errors on content delivery environments to avoid unnecessary logging:

    <root>
      <priority value="ERROR" />
      <appender-ref ref="LogFileAppender" />
    </root>

    13) Disable Analytics for any content item that doesn't add value, for example a page that redirects to another page.

    14) When using web user controls, avoid registering them on the page the plain ASP.NET way:

    <%@ Register Src="~/layouts/UserControls/MyControl.ascx" TagName="MyControl" TagPrefix="uc2" %>

    Use the Sublayout web control instead, so that Sitecore caching can be leveraged:

    <sc:Sublayout ID="ID" Path="/layouts/UserControls/MyControl.ascx" Cacheable="true" runat="server" />

    15) Avoid querying for all descendants recursively when the items you need are direct children:

    Sitecore.Context.Database.SelectItems("/sitecore/content/Home//*");
    // Instead, fetch the parent item and enumerate its children:
    Sitecore.Context.Database.GetItem("/sitecore/content/Home").GetChildren();

    16) On IIS, enable static and dynamic content compression on CM and CD. More: http://technet.microsoft.com/en-us/library/cc754668%28WS.10%29.aspx

    17) Enable HTTP keep-alive and content expiration in IIS.

    18) Use GUIDs when accessing items and fields instead of names or paths. It's faster, and it won't break your code when things get moved or renamed:

    Context.Database.GetItem("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}")
    Context.Item.Fields["{89D38A8F-394E-45B0-826B-1A826CF4046D}"];
    // is better than
    Context.Database.GetItem("/Home/MyItem")
    Context.Item.Fields["FieldName"]

    Hope this helps.

    Read the article

  • How to Upload a file from client to server using OFBIZ?

    - by SIVAKUMAR.J
    I'm new to OFBiz, so please keep your answer as simple as possible; examples would be kind. My problem: I created a project inside the ofbiz/hot-deploy folder, namely productionmgntSystem. Inside the folder ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem I created a file app_details_1.ftl with the following content:

    <html>
      <head>
        <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
        <title>Insert title here</title>
        <script type="text/javascript">
          function uploadFile() {
            //alert("Before calling upload.jsp");
            window.location = '<@ofbizUrl>testing_service1</@ofbizUrl>';
          }
        </script>
      </head>
      <!-- <form action="<@ofbizUrl>testing_service1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> -->
      <form action="<@ofbizUrl>logout1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm">
        <center style="height: 299px;">
          <table border="0" style="height: 177px; width: 788px">
            <tr style="height: 115px;">
              <td style="width: 103px;"></td>
              <td style="width: 413px;"><h1>APPLICATION DETAILS</h1></td>
              <td style="width: 55px;"></td>
            </tr>
            <tr>
              <td style="width: 125px;">Application name :</td>
              <td><input name="app_name_txt" id="txt_1" value=" " /></td>
            </tr>
            <tr>
              <td style="width: 125px;">Excel sheet :</td>
              <td><input type="file" name="filename"/></td>
            </tr>
            <tr>
              <td>
                <!-- <input type="button" name="logout1_cmd" value="Logout" onclick="logout1()"/> -->
                <input type="submit" name="logout_cmd" value="logout"/>
              </td>
              <td>
                <!-- <input type="submit" name="upload_cmd" value="Submit" /> -->
                <input type="button" name="upload1_cmd" value="Upload" onclick="uploadFile()"/>
              </td>
            </tr>
          </table>
        </center>
      </form>
    </html>

    The file ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem\WEB-INF\controller.xml contains (among other things):

    <request-map uri="testing_service1">
      <security https="true" auth="true"/>
      <event type="java" path="org.ofbiz.productionmgntSystem.web_app_req.WebServices1" invoke="testingService"/>
      <response name="ok" type="view" value="ok_view"/>
      <response name="exception" type="view" value="exception_view"/>
    </request-map>
    ...
    <view-map name="ok_view" type="ftl" page="ok_view.ftl"/>
    <view-map name="exception_view" type="ftl" page="exception_view.ftl"/>

    And this is ofbiz\hot-deploy\productionmgntSystem\src\org\ofbiz\productionmgntSystem\web_app_req\WebServices1.java (debug prints removed for brevity):

    package org.ofbiz.productionmgntSystem.web_app_req;

    import java.io.DataInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class WebServices1 {

        public static String testingService(HttpServletRequest request, HttpServletResponse response) {
            String result = "ok";
            String contentType = request.getContentType();
            if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) {
                try {
                    DataInputStream in = new DataInputStream(request.getInputStream());
                    int formDataLength = request.getContentLength();
                    byte dataBytes[] = new byte[formDataLength];
                    int byteRead = 0;
                    int totalBytesRead = 0;
                    // Read the whole request body into the buffer. Note: the length passed to
                    // read() must shrink as bytes arrive, otherwise read() is asked for more
                    // bytes than fit in the remaining buffer.
                    while (totalBytesRead < formDataLength) {
                        byteRead = in.read(dataBytes, totalBytesRead, formDataLength - totalBytesRead);
                        if (byteRead < 0) {
                            break; // unexpected end of stream
                        }
                        totalBytesRead += byteRead;
                    }
                    String file = new String(dataBytes);
                    // Extract the original file name from the multipart headers.
                    String saveFile = file.substring(file.indexOf("filename=\"") + 10);
                    saveFile = saveFile.substring(0, saveFile.indexOf("\n"));
                    saveFile = saveFile.substring(saveFile.lastIndexOf("\\") + 1, saveFile.indexOf("\""));
                    // The multipart boundary is carried in the Content-Type header.
                    int lastIndex = contentType.lastIndexOf("=");
                    String boundary = contentType.substring(lastIndex + 1, contentType.length());
                    // Locate the file payload by skipping the three header lines of the file part.
                    int pos;
                    pos = file.indexOf("filename=\"");
                    pos = file.indexOf("\n", pos) + 1;
                    pos = file.indexOf("\n", pos) + 1;
                    pos = file.indexOf("\n", pos) + 1;
                    int boundaryLocation = file.indexOf(boundary, pos) - 4;
                    int startPos = ((file.substring(0, pos)).getBytes()).length;
                    int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length;
                    // Write the payload to a new file with the same name.
                    FileOutputStream fileOut = new FileOutputStream("/" + saveFile);
                    fileOut.write(dataBytes, startPos, (endPos - startPos));
                    fileOut.flush();
                    fileOut.close();
                } catch (IOException ioe) {
                    return "exception";
                } catch (Exception ex) {
                    return "exception";
                }
            } else {
                result = "exception";
            }
            return result;
        }
    }

    I want to upload a file to the server. The file is obtained from the user via the <input type="file"/> tag in app_details_1.ftl and should be written to the server by the testingService(HttpServletRequest, HttpServletResponse) method of the WebServices1 class, but the file is never uploaded. Please give me a good solution for uploading a file to the server.

    Read the article

  • CodePlex Daily Summary for Wednesday, June 09, 2010

    CodePlex Daily Summary for Wednesday, June 09, 2010

    New Projects

    - .NET Transactional File Manager: a .NET API that supports including file system operations such as file copy, move and delete in a transaction. It's an i...
    - 3D World Studio Content Pipeline for Windows Phone 7: a port of PhotonicGames' project http://xna3dws.codeplex.com/releases/view/42994 for the Windows Phone 7 tools (XNA 4.0 CTP).
    - Advanced Script Editor for 3D Rad: makes it easier for 3D Rad coders to write scripts. Developed in C#, it features a functions list, a favourites list, object...
    - Ajax ASP.Net Forum: a fast and lightweight open source free forum developed in ASP.Net 3.5, AJAX, CSS, SQL and JavaScript cache (filter, sort and move through table records at ...)
    - Axon: the home automation system that I will be running in my home. It will be a collection of different technologies and projects, often experim...
    - BigBallz: a betting-pool site for various championships, initially conceived for the 2010 soccer World Cup. (Translated from Portuguese.)
    - BigfootMVC: MVC framework for DotNetNuke.
    - BigfootSQL: a StringBuilder for SQL. BigfootSQL was built with simplicity in mind. It assumes that you are comfortable writing SQL but dislike the effort required ...
    - Bxf (Basic XAML Framework): a simple, streamlined set of UI components designed to demonstrate the minimum framework functionality required to ma...
    - elZerf - elektronische Zeiterfassung: an electronic time-tracking system built as a seminar paper in the Web Application Programming module. (Translated from German.)
    - IntoFactories.Net - Samples: hosts samples created by members of the IntoFactories.NET Team blog.
    - Lanchonete: a management system for snack bars. (Translated from Portuguese.)
    - Medieval Dynasties: a game written in C# 3.5 and XNA 3.1 at the moment, inspired by Crusader Kings, Total War and Civilization.
    - PMMsg: a project to replace the standard messaging client on the Windows Mobile platform, mainly geared towards Windows Mobile 6.5.3 VGA devices. Also an...
    - PunkPong: an open source "Pong"-alike game written entirely in DHTML (JavaScript, CSS and HTML) that uses keyboard or mouse. This cross-platform a...
    - Renegade Legion Fighter Calculator: in working on assigning fighters to squadrons, flights, and groups for a campaign, I was struck by the sheer amount of calculations I had to make. ...
    - Sharpotify - Spotify .Net Library: a Spotify library in C#, based on the Jotify and SharPot projects. It is not a libspotify wrapper; it is a full .NET Spotify protoc...
    - Silverlight load on demand with MEF: with MEF, a Silverlight control can be split into several packages (xap files). Each package can contain one or more pages and will be downloaded on dem...
    - SOLID by example: source code examples to understand SOLID design principles. Most of them were taken from http://www.lostechies.com/
    - SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: a version of RS.EXE that you can use with Forms Authentication in native mode. Use the following arguments to specify credentials (just like Basic ...
    - Stripper: removes diacritics and other unwanted characters to produce more standardized file naming.
    - study: study.
    - UncoverPIC: a Silverlight game strongly inspired by the famous arcade game "GalsPanic" (see http://en.wikipedia.org/wiki/Gals_Panic). It was dev...
    - Unity3D Untitled MMO: Unity3D Untitled MMO framework.
    - UnnamedShop: UnnamedShop.
    - XBStudio.asp.net.automation: a unit testing automation library for asp.net.
    - XBStudio.Web: XBStudio web application.

    New Releases

    - 3D World Studio Content Pipeline for Windows Phone 7: Initial Release (0.1): the first release of the project, with plenty of hackery and kludges to go around, but it mostly works! Let me know if you hit any bugs.
    - Acies: Acies - Alpha Build 0.0.10: alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249-4560-...)
    - Advanced Script Editor for 3D Rad: Advanced Script Editor - Version 2.6: despite various previous releases on the 3D Rad forum, this is the first release on CodePlex.
    - Ajax ASP.Net Forum: First Release: first release prior to CodePlex publish (sent to admins), so it isn't all finished. VERSION: 0.1.2. FEATURES: Main Home, where all the Forums (called ...
    - Artist Follower for Microsoft Access: Artist Follower 0.5.1: just one form to manage artists and links!
    - Artist Follower for Microsoft Access: Artists Follower 0.5.0: the first release of Artist Follower.
    - ASP.NET MVC SiteMap provider - MvcSiteMapProvider: MvcSiteMapProvider 2.0.0 CTP1: a community technology preview of MvcSiteMapProvider version 2.0. It is not backwards compatible with older MvcSiteMapProvider versions.
    - B&W Port Scanner: Black`n`White Port Scanner 4.0: version 4 includes improved vulnerability detection tools, report creation, improved stability, a much better port information database, nume...
    - BaseCalendar: BaseControls 1.1: contains the BaseCalendar ASP.NET control. Changes: rendering TH by default inside THEAD; added option (ShowMinNumWeeks) to r...
    - BigfootSQL: BigfootSQL Source Code: BigfootSQL C# version 01.
    - Commerce Server 2009 Orders using Pipelines in a Console Application: ConsoleApplication To Place Orders: a console application to place orders with Commerce Server 2009 foundation.
    - Community Forums NNTP bridge: V33 and V34: releases of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. Each release has ad...
    - ContainerOne - C# application server: V0.1.2.0: a new minor release containing an integration test component for runtime testing, refactored and cleaned solution files, and the first unit tests.
    - Extend SmallBasic: Teaching Extensions v.020: moved Tortoise.approve to ProgramWindow.TakeScreenShot().
    - fleet It: v0.06 Alpha: features: caching implemented for fleets; various bug fixes; settings implemented; resolved a logical issue with getting fleets; u...
    - Frotz.NET: Frotz.NET B2: in addition to B1 changes: added ZTools to enable a debugging view of zcode files; added a rudimentary scroll-back buffer. B1 changes: got Adapt...
    - FsObserver: FsObserver 2.0: basically the same as FsObserver 1.0, but the "-help" documentation has been cleaned up somewhat and the code has been refactored so that it...
    - GPdotNET - Genetic Programming Tool: GPdotNETv1.0: more details on http://bhrnjica.wordpress.com/gpdotnet
    - HERB.IQ: Alpha 0.1 Source code release 8.
    - imdb movie downloader: myImdb 0.9.3 and myImdb 0.9.4.
    - jccc .NET smart framework: version 1.2010.06.07: added Oracle databases support.
    - LongBar: LongBar 2.1 Build 313: fixed library and updates to work with updated Live services; Options: you can now disable the shadow.
    - MDownloader: MDownloader-0.15.17.59623: fixed FileFactory provider; improved postpone policies; added a network request limiter.
    - MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1: installer for v1.0 beta 1.1. It can now convert files with spaces in the path or filename. I have also created a filter for the SaveFil...
    - MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1 Source Code: source code for v1.0 beta 1.1.
    - mesoBoard: mesoBoard - 0.9.1 beta: fixed file download permissions; released under the New BSD License.
    - MPCLI: Alpha Release (0.1.0.0): this release has core functionality and is considered a potential candidate for a feature-complete release of this library. However, suggestions fo...
    - N2 CMS: 2.0: N2 is a lightweight CMS framework for ASP.NET. It helps professional developers build great web sites that anyone can update. Major changes (1.5 -...
    - NHTrace: NHTrace-47571.
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.125: the NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...
    - NSoup: NSoup 0.2: corresponds to jsoup version 1.1.1. The list of changes can be viewed here.
    - Opalis Community Releases: Integration Pack for Data Manipulation: enables you to perform a wider variety of data manipulation tasks as well as aggregate data into common ...
    - Performance Analysis of Logs (PAL) Tool: PAL v2.0 Beta 1: fixed counter sorting (a minor bug where duplicate counter expression paths were not being removed); analysis added: LogicalDisk Read/...
    - RoTwee: RoTwee (12.0.0.0): trial version; (17925) make it possible to change window size.
    - Sharpotify - Spotify .Net Library: Sharpotify.Library 1.0: stable release. You can connect with Spotify, search, browse (tracks, albums, artists), get a music stream, create and edit you...
    - Silverlight load on demand with MEF: mal.Web.Silverlight.MEF 1.0.0.0.
    - sMAPtool: sMAPtool v0.7e (without Maps): added a color value expansion bar for hmap (right click to select the color scheme); added more complex hmap editing, now using a 4-point bounding rect...
    - SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: Initial release: Enjoy!
    - Stripper: Stripper 0.1.1 (CLi): removes diacritics and other unwanted characters to produce more standardized file naming, especially French characters and maybe other lang...
    - Unity3D Untitled MMO: v1: version.
    - UrzaGatherer: UrzaGatherer 2.01a: new version with some minor bugs corrected.
    - VCC: Latest builds v2.1.30608.0 and v2.1.30608.1: automatic drops of the latest builds.
    - Watermarker.NET: 0.1.3811: a newer version with some improvements, released as a .zip archive because settings have been added, so there will be .exe and .config files.
    - Yet Another GPS: Alfa Release: working alpha release.

    Most Popular Projects: Dozer Enterprise Library for .NET; Employee Management System; WiiMote Physics; VisualStudio 2010 JavaScript Outlining; Spider Compiler; Concurrent Cache; Oil Slick Live Feeds; CSUFVGDC Summer Jam; WinGet; SiteMap Utility for DNN Blog Module.

    Most Active Projects: Community Forums NNTP bridge; patterns & practices - Enterprise Library; Rhyduino - Arduino and Managed Code; jQuery Library for SharePoint Web Services; Rawr; NB_Store - Free DotNetNuke Ecommerce Catalog Module; Andrew's XNA Helpers; BlogEngine.NET; StyleCop; Customer Portal Accelerator for Microsoft Dynamics CRM.

    Read the article

  • Top tweets SOA Partner Community – October 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - SOA Community: Deploying Fusion Order Demo on 11.1.1.6 by Antony Reynolds http://wp.me/p10C8u-vA
    - leonsmiers: Can't wait to test it >> RT @OracleSOA: Case Management patterns, session coverage from #OOW #OracleBPM #ACM #BPM http://bit.ly/OdcZL6
    - Danilo Schmiedel: Bye bye San Francisco. #oow was a great conference in a wonderful city! Thanks! @soacommunity pic.twitter.com/lcYSe9xC
    - OPITZ CONSULTING: The Journey towards #Oracle #BPM @OpenWorld 2012 - Slides by @t_winterberg & H. Normann: http://ow.ly/edkWE #oow
    - demed: Full house at the SOA Customer Advisory Board! #oow12 http://instagr.am/p/QX9B8eLMLS/
    - Danilo Schmiedel: "@whitehorsesnl: Had some great talks with the BPM guys at the DEMOgrounds. It is one of the best things at #oow" -> I agree!! @soacommunity
    - Mark Simpson: Fusion Middleware Global Innovation Awards: nice to pick up a soa and bpm with our customer. #oow
    - Mark Simpson: RT @SOASimone: #oraclesoa #oow hands on lab fully booked pic.twitter.com/pwI94Ew7 <-- quick, provision some more compute power on the cloud!
    - Oracle SOA: Join us for BPM and Analytics: Process Dashboards, BAM, and Intelligent Optimization. Moscone South - 308 #OracleBPM #OOW
    - Oracle SOA: Real-time public safety demo! License plate recognition and processing in London via Oracle Event Processing. #oow pic.twitter.com/WufesDBq
    - Marc: Nice session on customer success stories on #SOA11g with @SOASimone. Pros and cons and architectural overview. #oow pic.twitter.com/bzuhsujm
    - Lucas Jellema: Full length keynote on Middleware #oow: http://medianetwork.oracle.com/video/player/1873556035001 #oow_amis
    - OracleBlogs: Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers? http://ow.ly/2stVQ0
    - OracleBlogs: Open World Session - BPM, SOA and ADF Combined: Patterns learned from Fusion Applications http://ow.ly/2suhzf
    - Ronald Luttikhuizen: VENNSTER BLOG | Presentations at OpenWorld 2012 | http://blog.vennster.nl/2012/10/presentations-at-openworld-2012.html
    - Andrejus Baranovskis: @dschmied @soacommunity next OOW for sure, and maybe a SOA community event! @soacommunity
    - Danilo Schmiedel: @andrejusb Thanks Andrejus - I really enjoyed having a session with you at #oow. When is next time :-) ? @soacommunity
    - Lionel Dubreuil: @soacommunity #oow12 Today 1:15pm, Marriott Marquis Salon 7: Jump-starting Integration with Oracle Foundation Pack http://bit.ly/QKKJzF
    - Ronald Luttikhuizen: Impression from our fault handling session in OSB and SOA Suite from the audience @soacommunity @gschmutz #oow pic.twitter.com/WSg1Z89E
    - Marc: Nice session on Oracle Virtual Assembly for #SOA11g @soacommunity. Works with #exalogic but not required.
    - SOA Community: Send your #soacommunity #oow pictures and blog posts @soacommunity or http://www.facebook.com/soacommunity Enjoy OOW ;-)
    - Jon petter hjulstad: Oracle BPM - big leap forward in 11.1.1.7!
    - Whitehorses: Common BPM use cases from Oracle #bpm pic.twitter.com/ofOv04EF
    - Whitehorses: Oracle BPM 11.1.1.7 top new features. Interesting #oow #oowbenelux pic.twitter.com/HY9QN5un
    - SOA Community: Industrialized SOA - topic of Business Technology Magazine http://wp.me/p10C8u-vi
    - orclateamsoa: A-Team Blog #ateam: The curious case of SOA Human tasks' automatic completion http://ow.ly/1mq6YU
    - Simone Geib: Look for this sign #oow #oraclesoa pic.twitter.com/MJsPV4PO
    - Lucas Jellema: My summary of Larry Ellison's keynote at #oow on the AMIS Blog: http://technology.amis.nl/2012/10/01/oow-2012-larry-ellisons-keynote-announcements-exa-cloud-database/ #oow_amis
    - gschmutz: Join my #oow session "Five Cool Use Cases for the Spring Component" to see the power of Spring and SOA Suite combined! Moscone 310 - 3:15 PM
    - Ronald Luttikhuizen: Thanks to @soacommunity for a great SOA/BPM dinner event yesterday night! #oow pic.twitter.com/v7x3i0DC
    - OracleBlogs: OSB, Service Callouts and OQL http://ow.ly/2sq6B2
    - OracleBlogs: Cloud and On-Premises Applications Integration using Oracle Integration Adapters http://ow.ly/2sqiDy
    - OracleBlogs: Adapters, SOA Suite and More @Openworld 2012 http://ow.ly/2srdTg
    - Eric Elzinga: OSB, Service Callouts and OQL - Part 3, http://see.sc/JodzEx #oracleservicebus
    - Donatas Valys: interesting articles about SOA industrialization to read #soa #industrialization http://it-republik.de/business-technology/bt-magazin-ausgaben/Industrialized-SOA-000516.html
    - gschmutz: "@techsymp: 2012 Symposium Presentation Download Page Now Available! 75% of presentations published. http://www.servicetechsymposium.com" find mine there..
    - Oracle BPM: Customer Experience and BPM - From Efficiency to Engagement #bpm #oraclebpm #processmanagement #socialbpm http://pub.vitrue.com/Tahi
    - SOA Community: SOA Community Newsletter September 2012 http://wp.me/p10C8u-wa
    - SOA Community: again again again.... it is Oracle Open World 2012 http://wp.me/p10C8u-wk
    - OracleBlogs: SOA Proactive support http://ow.ly/2smrSJ
    - demed: @gschmutz on NoSQL at @techsymp http://lockerz.com/s/247601661
    - demed: Just finished "#BigData and its impact on #SOA" talk @techsymp. Really enjoyed getting off the beaten path. #london #oep http://lockerz.com/s/247636974
    - OTNArchBeat: Need help selling SOA to business stakeholders? Give them this free eBook. #soasuite http://pub.vitrue.com/hsQY
    - SOA Community: Top tweets SOA Partner Community - September 2012 http://wp.me/p10C8u-vc
    - SOA Community: Move Data into the grid for scalable, predictable response times http://wp.me/p10C8u-vv
    - ServiceTechSymposium: The September issue of the Service Technology Magazine is now published with six new items! Read them at http://www.servicetechmag.com
    - Marc: Reviewed @Packt_OracleFMW new book on SOA11g administration! Very good! http://tinyurl.com/8pzd5ww
    - SOA Community: BPM Solution Catalogue - promote your process templates http://wp.me/p10C8u-vt
    - OTNArchBeat: BPM ADF Task forms: Checking whether the current user is in a BPM Swimlane | @ChrisKarlChan http://pub.vitrue.com/aPMG
    - OTNArchBeat: Cloud, automation drive new growth in SOA governance market | @JoeMcKendrick http://pub.vitrue.com/hNPv
    - Simon Haslam: Looking for "oak style"(!) advanced content but you're a middleware specialist? See #ukoug2012 #middlewaresunday http://2012.ukoug.org/default.asp?p=9355
    - Simon Haslam: The #ukoug2012 agenda is "go, go, go!" (as Murray would say!) http://2012.ukoug.org/agendagrid
    - Germán Gazzoni: SOA Spezial II available - Industrialized SOA: the revised and expanded new edition of the SOA Spezial special issue... (translated from German) http://bit.ly/PAWwN9
    - Oracle SOA: Flip thru new interactive "Oracle SOA Suite eBook - In the Customers Words" #middleware #soa #oraclesoa http://pub.vitrue.com/NzFZ
    - SOA Community: Follow SOA Community on Facebook http://www.facebook.com/soacommunity #soacommunity #opn

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Mix Forum

    Technorati Tags: SOA Community twitter, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • MySQL is hogging my server resources

    - by Reacen
    Does anyone have any idea what can cause this weird behaviour and how I can go about fixing it? The load is all coming from MySQL only (both RAM and CPU usage), for about 10 minutes after I reboot my Java game server (which has a pool of 256 connections). There are not that many queries, and I think it may be more of a MySQL misconfiguration problem.

    My server: 3.20 GHz * 6 cores / 24 GB RAM / 64-bit Windows Server 2003.
    My game server: a Java server with a 256-connection MySQL pool (MyISAM engine), about 500,000 accounts, 9 million rows of game items in the database, and about 3,000 connected players.

    About 15 minutes after the game server reboots, the machine regains its stability: CPU usage drops to 1-5% and memory to 6 GB.

    Here is a copy of my MySQL configuration. Any advice about it will be appreciated - I really set it up almost at random.

    # Example MySQL config file for very large systems.
    #
    # This is for a large system with memory of 1G-2G where the system runs mainly
    # MySQL.
    #
    # You can copy this file to
    # /etc/my.cnf to set global options,
    # mysql-data-dir/my.cnf to set server-specific options (in this
    # installation this directory is C:\mysql\data) or
    # ~/.my.cnf to set user-specific options.
    #
    # In this file, you can use all long options that a program supports.
    # If you want to know which options a program supports, run the program
    # with the "--help" option.

    # The following options will be passed to all MySQL clients
    [client]
    #password = your_password
    port = 3306
    socket = /tmp/mysql.sock

    # Here follows entries for some specific programs

    # The MySQL server
    [mysqld]
    #log=c:\mysql.log
    port = 3306
    socket = /tmp/mysql.sock
    skip-locking
    key_buffer_size = 2572M
    max_allowed_packet = 64M
    table_open_cache = 512
    sort_buffer_size = 128M
    read_buffer_size = 128M
    read_rnd_buffer_size = 128M
    myisam_sort_buffer_size = 500M
    thread_cache_size = 32
    query_cache_size = 1948M
    # Try number of CPU's*2 for thread_concurrency
    thread_concurrency = 12
    max_connections = 5000

    # Don't listen on a TCP/IP port at all. This can be a security enhancement,
    # if all processes that need to connect to mysqld run on the same host.
    # All interaction with mysqld must be made via Unix sockets or named pipes.
    # Note that using this option without enabling named pipes on Windows
    # (via the "enable-named-pipe" option) will render mysqld useless!
    #skip-networking

    # Replication Master Server (default)
    # binary logging is required for replication
    log-bin=mysql-bin

    # required unique id between 1 and 2^32 - 1
    # defaults to 1 if master-host is not set
    # but will not function as a master if omitted
    server-id = 1

    # Replication Slave (comment out master section to use this)
    #
    # To configure this host as a replication slave, you can choose between
    # two methods :
    #
    # 1) Use the CHANGE MASTER TO command (fully described in our manual) -
    #    the syntax is:
    #
    #    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
    #    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
    #
    #    where you replace <host>, <user>, <password> by quoted strings and
    #    <port> by the master's port number (3306 by default).
    #
    #    Example:
    #
    #    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
    #    MASTER_USER='joe', MASTER_PASSWORD='secret';
    #
    # OR
    #
    # 2) Set the variables below. However, in case you choose this method, then
    #    when you start replication for the first time (even unsuccessfully, for
    #    example if you mistyped the password in master-password and the slave
    #    fails to connect), the slave will create a master.info file, and any
    #    later change in this file to the variables' values below will be ignored
    #    and overridden by the content of the master.info file, unless you
    #    shutdown the slave server, delete master.info and restart the slave
    #    server. For that reason, you may want to leave the lines below untouched
    #    (commented) and instead use CHANGE MASTER TO (see above)
    #
    # required unique id between 2 and 2^32 - 1
    # (and different from the master)
    # defaults to 2 if master-host is set
    # but will not function as a slave if omitted
    #server-id = 2
    #
    # The replication master for this slave - required
    #master-host = <hostname>
    #
    # The username the slave will use for authentication when connecting
    # to the master - required
    #master-user = <username>
    #
    # The password the slave will authenticate with when connecting to
    # the master - required
    #master-password = <password>
    #
    # The port the master is listening on.
    # optional - defaults to 3306
    #master-port = <port>
    #
    # binary logging - not required for slaves, but recommended
    #log-bin=mysql-bin
    #
    # binary logging format - mixed recommended
    #binlog_format=mixed

    # Point the following paths to different dedicated disks
    #tmpdir = /tmp/
    #log-update = /path-to-dedicated-directory/hostname

    # Uncomment the following if you are using InnoDB tables
    #innodb_data_home_dir = C:\mysql\data/
    #innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
    #innodb_log_group_home_dir = C:\mysql\data/
    # You can set .._buffer_pool_size up to 50 - 80 %
    # of RAM but beware of setting memory usage too high
    #innodb_buffer_pool_size = 384M
    #innodb_additional_mem_pool_size = 20M
    # Set .._log_file_size to 25 % of buffer pool size
    #innodb_log_file_size = 100M
    #innodb_log_buffer_size = 8M
    #innodb_flush_log_at_trx_commit = 1
    #innodb_lock_wait_timeout = 50

    [mysqldump]
    quick
    max_allowed_packet = 64M

    [mysql]
    no-auto-rehash
    # Remove the next comment character if you are not familiar with SQL
    #safe-updates

    [myisamchk]
    key_buffer_size = 256M
    sort_buffer_size = 256M
    read_buffer = 8M
    write_buffer = 8M

    [mysqlhotcopy]
    interactive-timeout
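    One way to test the misconfiguration theory before changing anything is to watch MySQL's own counters during those first minutes after a game-server restart. A minimal sketch using standard MySQL 5.x status variables (the thresholds in the comments are rough rules of thumb, not hard limits):

    -- MyISAM key cache efficiency: Key_reads / Key_read_requests should stay very low
    SHOW GLOBAL STATUS LIKE 'Key_read%';

    -- Query cache churn: rising Qcache_lowmem_prunes means the 1948M cache is thrashing;
    -- a cache this large is also invalidated on every write, which itself costs CPU
    SHOW GLOBAL STATUS LIKE 'Qcache%';

    -- Thread churn while the 256-connection pool reconnects: many Threads_created
    -- suggests thread_cache_size is being overrun during the reconnect storm
    SHOW GLOBAL STATUS LIKE 'Threads%';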

    Read the article

  • EM12c Release 4: New EMCLI Verbs

    - by SubinDaniVarughese
    Here are the new EM CLI verbs in Enterprise Manager 12c Release 4 (12.1.0.4). These help you write new scripts, or enhance your existing scripts, for further automation.

Basic Administration Verbs
  invoke_ws - Invoke EM web service.

ADM Verbs
  associate_target_to_adm - Associate a target to an application data model.
  export_adm - Export Application Data Model to a specified .xml file.
  import_adm - Import Application Data Model from a specified .xml file.
  list_adms - List the names, target names and application suites of existing Application Data Models.
  verify_adm - Submit an application data model verify job for the target specified.

Agent Update Verbs
  get_agent_update_status - Show Agent Update results.
  get_not_updatable_agents - Show agents that are not updatable.
  get_updatable_agents - Show updatable agents.
  update_agents - Perform Agent Update prerequisites and submit the Agent Update job.

BI Publisher Reports Verbs
  grant_bipublisher_roles - Grant access to the BI Publisher catalog and features.
  revoke_bipublisher_roles - Revoke access to the BI Publisher catalog and features.

Blackout Verbs
  create_rbk - Create a retroactive blackout.

CFW Verbs
  cancel_cloud_service_requests - Cancel cloud service requests.
  delete_cloud_service_instances - Delete cloud service instances.
  delete_cloud_user_objects - Delete cloud user objects.
  get_cloud_service_instances - Get information about cloud service instances.
  get_cloud_service_requests - Get information about cloud requests.
  get_cloud_user_objects - Get information about cloud user objects.

Chargeback Verbs
  add_chargeback_entity - Add the given entity to Chargeback.
  assign_charge_plan - Assign a plan to a chargeback entity.
  assign_cost_center - Assign a cost center to a chargeback entity.
  create_charge_entity_type - Create a charge entity type.
  export_charge_plans - Export charge plans metadata to a file.
  export_custom_charge_items - Export user-defined charge items to a file.
  import_charge_plans - Import charge plans metadata from the given file.
  import_custom_charge_items - Import user-defined charge items metadata from the given file.
  list_charge_plans - List the charge plans in Chargeback.
  list_chargeback_entities - List all the entities in Chargeback.
  list_chargeback_entity_types - List all the entity types supported in Chargeback.
  list_cost_centers - List the cost centers in Chargeback.
  remove_chargeback_entity - Remove the given entity from Chargeback.
  unassign_charge_plan - Unassign the plan associated with a chargeback entity.
  unassign_cost_center - Unassign the cost center associated with a chargeback entity.

Configuration/Association History Verbs
  disable_config_history - Disable configuration history computation for a target type.
  enable_config_history - Enable configuration history computation for a target type.
  set_config_history_retention_period - Set the amount of time for which configuration history is retained.

Configuration Compare Verbs
  config_compare - Submit the configuration comparison job.
  get_config_templates - Get all the comparison templates from the repository.

Compliance Verbs
  fix_compliance_state - Fix compliance state by removing references to deleted targets.

Credential Verbs
  update_credential_set

Data Subset Verbs
  export_subset_definition - Export the specified subset definition as an XML file at the specified directory path.
  generate_subset - Generate a subset using the specified subset definition and target database.
  import_subset_definition - Import a subset definition from the specified XML file.
  import_subset_dump - Import a dump file into the specified target database.
  list_subset_definitions - Get the list of subset definitions, with ADM and target names.

Delete Pluggable Database Job Verbs
  delete_pluggable_database - Delete a pluggable database.

Deployment Procedure Verbs
  get_runtime_data - Get the runtime data of an execution.

Discover and Push to Agents Verbs
  generate_discovery_input - Generate a discovery input file for discovering auto-discovered domains.
  refresh_fa - Refresh a Fusion instance.
  run_fa_diagnostics - Run Fusion Applications diagnostics.

Fusion Middleware Provisioning Verbs
  create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain.
  create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home.
  create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from installation media.

Gold Agent Image Verbs
  create_gold_agent_image - Create a gold agent image.
  decouple_gold_agent_image - Decouple the agent from a gold agent image.
  delete_gold_agent_image - Delete a gold agent image.
  get_gold_agent_image_activity_status - Get gold agent image activity status.
  get_gold_agent_image_details - Get the gold agent image details.
  list_agents_on_gold_image - List agents on a gold agent image.
  list_gold_agent_image_activities - List gold agent image activities.
  list_gold_agent_image_series - List gold agent image series.
  list_gold_agent_images - List the available gold agent images.
  promote_gold_agent_image - Promote a gold agent image.
  stage_gold_agent_image - Stage a gold agent image.

Incident Rules Verbs
  add_target_to_rule_set - Add a target to an enterprise rule set.
  delete_incident_record - Delete one or more open incidents.
  remove_target_from_rule_set - Remove a target from an enterprise rule set.

Job Verbs
  export_jobs - Export job details into an XML file.
  import_jobs - Import job definitions from an XML file.
  job_input_file - Supply details for a job verb in a property file.
  resume_job - Resume a job or set of jobs.
  suspend_job - Suspend a job or set of jobs.

Oracle Database as a Service Verbs
  config_db_service_target - Configure a DB Service target for OPC.

Privilege Delegation Settings Verbs
  clear_default_privilege_delegation_setting - Clear the default privilege delegation setting for a given list of platforms.
  set_default_privilege_delegation_setting - Set the default privilege delegation setting for a given list of platforms.
  test_privilege_delegation_setting - Test a privilege delegation setting on a host.

SSA Verbs
  cleanup_dbaas_requests - Submit a cleanup request for a failed request.
  create_dbaas_quota - Create a database quota for an SSA user role.
  create_service_template - Create a service template.
  delete_dbaas_quota - Delete the database quota set up for an SSA user role.
  delete_service_template - Delete a given service template.
  get_dbaas_quota - List the database quota set up for all SSA user roles.
  get_dbaas_request_settings - List the database request settings.
  get_service_template_detail - Get details of a given service template.
  get_service_templates - Get the list of available service templates.
  rename_service_template - Rename a given service template.
  update_dbaas_quota - Update the database quota for an SSA user role.
  update_dbaas_request_settings - Update the database request settings.
  update_service_template - Update a given service template.

Saved Configurations Verbs
  get_saved_configs - Get the saved configurations from the repository.

Server Generated Alert Metric Verbs
  validate_server_generated_alerts - Server Generated Alert Metric verb.

Services Verbs
  edit_sl_rule - Edit the service level rule for the specified service.

Siebel Verbs
  list_siebel_enterprises - List Siebel enterprises currently monitored in EM.
  list_siebel_servers - List Siebel servers under a specified Siebel enterprise.
  update_siebel - Update a Siebel enterprise or its underlying servers.

SiteGuard Verbs
  add_siteguard_aux_hosts - Associate new auxiliary hosts to the system.
  configure_siteguard_lag - Configure apply lag and transport lag limits for databases.
  delete_siteguard_aux_host - Delete an auxiliary host associated with a site.
  delete_siteguard_lag - Erase the apply lag or transport lag limit for databases.
  get_siteguard_aux_hosts - Get all auxiliary hosts associated with a site.
  get_siteguard_health_checks - Show the schedule of health checks.
  get_siteguard_lag - Show the apply lag or transport lag limit for databases.
  schedule_siteguard_health_checks - Schedule health checks for an operation plan.
  stop_siteguard_health_checks - Stop all future health check executions of an operation plan.
  update_siteguard_lag - Update apply lag and transport lag limits for databases.

Software Library Verbs
  stage_swlib_entity_files - Stage files of an entity from the Software Library to a host target.

Target Data Verbs
  create_assoc - Create target associations.
  delete_assoc - Delete target associations.
  list_allowed_pairs - List allowed association types for specified source and destination.
  list_assoc - List associations between source and destination targets.
  manage_agent_partnership - Manage partnership between agents; used for explicitly assigning agent partnerships.

Trace Reports Verbs
  generate_ui_trace_report - Generate and download a UI page performance report (to identify slow-rendering pages).

VI EMCLI Verbs
  add_virtual_platform - Add Oracle Virtual Platform(s).
  modify_virtual_platform - Modify an Oracle Virtual Platform.

To get more details about each verb, execute:
$ emcli help <verb_name>
Example:
$ emcli help list_assoc

New resources in the list verb
These are the new resources in the EM CLI list verb:

Certificates
  WLSCertificateDetails

Credential Resource Group
  PreferredCredentialsDefaultSystemScope - Preferred credentials (system scope)
  PreferredCredentialsSystemScope - Target preferred credentials

Privilege Delegation Settings
  TargetPrivilegeDelegationSettingDetails - List privilege delegation setting details on a host
  TargetPrivilegeDelegationSetting - List privilege delegation settings on a host
  PrivilegeDelegationSettings - List all privilege delegation settings
  PrivilegeDelegationSettingDetails - List details of privilege delegation settings

To get more details about each resource, execute:
$ emcli list -resource="<resource_name>" -help
Example:
$ emcli list -resource="PrivilegeDelegationSettings" -help

Deprecated verbs:
Agent Administration Verbs
  resecure_agent - Resecure an agent.

To get the complete list of verbs, execute:
$ emcli help
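Put together, these verbs chain easily into scripts. Below is a minimal sketch that logs in, lists the Application Data Models, and exports one of them; the ADM name, file path and flag names are illustrative assumptions on my part, so confirm the exact syntax with $ emcli help export_adm first:

$ emcli login -username=sysman        # prompts for the password
$ emcli list_adms                     # pick an ADM name from the output
$ emcli export_adm -adm_name="Sales_ADM" -output_file="/tmp/sales_adm.xml"   # flag names are assumptions
$ emcli logout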

    Read the article

  • Installed OpenStack using the devstack install shell script, but getting a 500 error when I try opening the dashboard

    - by Arvind
    I followed the instructions at http://devstack.org/guides/single-machine.html to install OpenStack. I first installed Ubuntu on my Windows 7 PC using the officially supported Windows installer for Ubuntu 12.04 LTS, and after that I followed the instructions on the page above to install OpenStack. As per the instructions, I should be able to access the dashboard (aka Horizon) at http://192.168.1.4/ (that's the IP of the PC on which I installed Ubuntu/OpenStack). However, I get a 500 error page when I open it. How do I resolve this error? I want to set up a dev environment for OpenStack.

For your reference, the whole error message is given below:

FilterError at /
/usr/bin/env: node: No such file or directory
Request Method: GET
Request URL: http://192.168.1.4/
Django Version: 1.4.2
Exception Type: FilterError
Exception Value: /usr/bin/env: node: No such file or directory
Exception Location: /usr/local/lib/python2.7/dist-packages/compressor/filters/base.py in input, line 133
Python Executable: /usr/bin/python
Python Version: 2.7.3
Python Path: ['/opt/stack/horizon/openstack_dashboard/wsgi/../..', '/opt/stack/python-keystoneclient', '/opt/stack/python-novaclient', '/opt/stack/python-openstackclient', '/opt/stack/keystone', '/opt/stack/glance', '/opt/stack/python-glanceclient/setuptools_git-0.4.2-py2.7.egg', '/opt/stack/python-glanceclient', '/opt/stack/nova', '/opt/stack/horizon', '/opt/stack/cinder', '/opt/stack/python-cinderclient', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/ubuntuone-client', '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel', '/usr/lib/python2.7/dist-packages/ubuntuone-couch', '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol', '/opt/stack/horizon/openstack_dashboard']
Server time: Sat, 27 Oct 2012 08:43:29 +0000

Error during template rendering: in template /opt/stack/horizon/openstack_dashboard/templates/_stylesheets.html, error at line 3:
/usr/bin/env: node: No such file or directory

1  {% load compress %}
2
3  {% compress css %}
4  <link href='{{ STATIC_URL }}dashboard/less/horizon.less' type='text/less' media='screen' rel='stylesheet' />
5  {% endcompress %}
6
7  <link rel="shortcut icon" href="{{ STATIC_URL }}dashboard/img/favicon.ico"/>
8

The traceback is given below:

Environment:
Request Method: GET
Request URL: http://192.168.1.4/
Django Version: 1.4.2
Python Version: 2.7.3
Installed Applications:
('openstack_dashboard', 'django.contrib.contenttypes', 'django.contrib.auth', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.humanize', 'compressor', 'horizon', 'openstack_dashboard.dashboards.project', 'openstack_dashboard.dashboards.admin', 'openstack_dashboard.dashboards.settings', 'openstack_auth')
Installed Middleware:
('django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'horizon.middleware.HorizonMiddleware', 'django.middleware.doc.XViewMiddleware', 'django.middleware.locale.LocaleMiddleware')

Template error: in template /opt/stack/horizon/openstack_dashboard/templates/_stylesheets.html, error at line 3 (the same template excerpt as shown above).

Traceback:
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in get_response
  111. response = callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/views/decorators/vary.py" in inner_func
  36. response = func(*args, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/views.py" in splash
  38. return shortcuts.render(request, 'splash.html', {'form': form})
File "/usr/local/lib/python2.7/dist-packages/django/shortcuts/__init__.py" in render
  44. return HttpResponse(loader.render_to_string(*args, **kwargs),
File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py" in render_to_string
  176. return t.render(context_instance)
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render
  140. return self._render(context)
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in _render
  134. return self.nodelist.render(context)
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render
  823. bit = self.render_node(node, context)
File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py" in render_node
  74. return node.render(context)
File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py" in render
  155. return self.render_template(self.template, context)
File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py" in render_template
  137. output = template.render(context)
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render
  140. return self._render(context)
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in _render
  134. return self.nodelist.render(context)
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render
  823. bit = self.render_node(node, context)
File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py" in render_node
  74. return node.render(context)
File "/usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py" in render
  147. return self.render_compressed(context, self.kind, self.mode, forced=forced)
File "/usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py" in render_compressed
  107. rendered_output = self.render_output(compressor, mode, forced=forced)
File "/usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py" in render_output
  119. return compressor.output(mode, forced=forced)
File "/usr/local/lib/python2.7/dist-packages/compressor/css.py" in output
  51. ret.append(subnode.output(*args, **kwargs))
File "/usr/local/lib/python2.7/dist-packages/compressor/css.py" in output
  53. return super(CssCompressor, self).output(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in output
  230. content = self.filter_input(forced)
File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in filter_input
  192. for hunk in self.hunks(forced):
File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in hunks
  167. precompiled, value = self.precompile(value, **options)
File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in precompile
  210. command=command, filename=filename).input(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/compressor/filters/base.py" in input
  133. raise FilterError(err)

Exception Type: FilterError at /
Exception Value: /usr/bin/env: node: No such file or directory
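What the traceback shows is django-compressor shelling out through /usr/bin/env to run node while precompiling Horizon's horizon.less stylesheet, so the request fails whenever no node binary is on the PATH. A minimal sketch of one likely fix, assuming Ubuntu's nodejs package (whose binary may be installed as nodejs rather than node, hence the symlink):

sudo apt-get install nodejs
which node || sudo ln -s "$(which nodejs)" /usr/bin/node   # only needed if the binary is named nodejs
sudo service apache2 restart   # restart the web server serving Horizon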

    Read the article

  • Integrating NetBeans for Raspberry Pi Java Development

    - by speakjava
    Raspberry Pi IDE Java Development

The Raspberry Pi is an incredible device for building embedded Java applications but, although it is possible to run an IDE on the Pi, doing so really pushes it to its limits. It's much better to use a PC or laptop to develop the code and then deploy and test on the Pi. What I thought I'd do in this blog entry is run through the steps necessary to set up NetBeans on a PC for Java code development, with automatic deployment to the Raspberry Pi as part of the build process.

I will assume that your starting point is a Raspberry Pi with an SD card that has one of the latest Raspbian images on it. This is good because the distro now includes JDK 7, so there is no need to download and install a separate JDK. I will also assume that you have installed the JDK and NetBeans on your PC. These can be downloaded here.

There are numerous approaches you can take to this, including mounting the file system from the Raspberry Pi remotely on your development machine. I tried this and found that NetBeans got rather upset if the file system disappeared, either through a network interruption or the Raspberry Pi being turned off. The following method uses copying over SSH, which fails more gracefully if the Pi is not responding.

Step 1: Enable SSH on the Raspberry Pi

To run the Java applications you create, you will need to start Java on the Raspberry Pi with the appropriate class name, classpath and parameters. For non-JavaFX applications you can do this either from the Raspberry Pi desktop or, if you do not have a monitor connected, through a remote command line. To use the remote command line you need to enable SSH (a secure shell login over the network) and connect using an application like PuTTY.

You can enable SSH when you first boot the Raspberry Pi, as the raspi-config program runs automatically. You can also run it at any time afterwards with the command:

sudo raspi-config

This will bring up a menu of options. Select '8 Advanced Options', on the next screen select 'A4 SSH', then select 'Enable' and the task is complete.

Step 2: Configure Raspberry Pi Networking

By default, the Raspbian distribution configures the ethernet connection to use DHCP rather than a static IP address. You can continue to use DHCP if you want, but to avoid potentially having to change settings whenever you reboot the Pi, a static IP address is simpler.

To configure this on the Pi you need to edit the /etc/network/interfaces file. You will need to do this as root using the sudo command, so something like sudo vi /etc/network/interfaces. In this file you will see the line:

iface eth0 inet dhcp

This needs to be changed to the following:

iface eth0 inet static
    address 10.0.0.2
    gateway 10.0.0.254
    netmask 255.255.255.0

You will need to change the address, gateway and netmask values to an IP address appropriate for your network and to match the address of your gateway.
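Before moving on it is worth re-reading the new configuration and confirming the static address works. A quick sketch, assuming the interface is eth0 and the addresses used above:

sudo ifdown eth0 && sudo ifup eth0   # re-read /etc/network/interfaces
ip addr show eth0                    # confirm the 10.0.0.2 address is assigned
ping -c 3 10.0.0.254                 # confirm the gateway is reachable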
Step 3: Create a Public-Private Key Pair on Your Development Machine

How you do this will depend on which operating system you are using:

Mac OSX or Linux

Run the command:

ssh-keygen -t rsa

Press ENTER/RETURN to accept the default destination for saving the key. We do not need a passphrase, so simply press ENTER/RETURN for an empty one and once more to confirm.

The key will be created in the file .ssh/id_rsa.pub in your home directory. Display the contents of this file using the cat command:

cat ~/.ssh/id_rsa.pub

Open a window, SSH to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if the file does not exist). Copy and paste the contents of the id_rsa.pub file into the authorized_keys file and save it.

Windows

Since Windows is not a UNIX-derivative operating system, it does not include the necessary key-generating software by default. To generate the key I used puttygen.exe, which is available from the same site that provides the PuTTY application, here.

Download this and run it on your Windows machine. Follow the instructions to generate a key. I remove the key comment, but you can leave that if you want. Click "Save private key", confirm that you don't want to use a passphrase and select a filename and location for the key.

Copy the public key from the part of the window marked "Public key for pasting into OpenSSH authorized_keys file". Use PuTTY to connect to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if this does not exist). Paste the key information at the end of this file and save it.

Logout and then start PuTTY again. This time we need to create a saved session using the private key. Type the IP address of the Raspberry Pi into the "Hostname (or IP address)" field and expand "SSH" under the "Connection" category. Select "Auth". Click the "Browse" button under "Private key file for authentication" and select the file you saved from puttygen.

Go back to the "Session" category and enter a short name in the saved sessions field. Click "Save" to save the session.

Step 4: Test the Configuration

You should now be able to use scp (Mac/Linux) or pscp.exe (Windows) to copy files from your development machine to the Raspberry Pi without needing to authenticate by typing in a password (so we can automate the process in NetBeans). It's a good idea to test this using something like:

scp /tmp/foo pi@10.0.0.2:/tmp

on Linux or Mac, or

pscp.exe foo pi@raspi:/tmp

on Windows. (Note that on Windows we use the saved session name instead of the IP address or hostname so the saved private key is picked up.) pscp.exe is another tool available from the creators of PuTTY.

Step 5: Configure the NetBeans Build Script

Start NetBeans and create a new project (or open an existing one that you want to deploy automatically to the Raspberry Pi). Select the Files tab in the explorer window and expand your project. You will see a build.xml file; double-click it to edit it. This file will mostly be comments. At the end, but within the </project> tag, add the XML for a <target name="-post-jar"> element, shown below:

<target name="-post-jar">
  <echo level="info" message="Copying dist directory to remote Pi"/>
  <exec executable="scp" dir="${basedir}">
    <arg line="-r"/>
    <arg value="dist"/>
    <arg value="pi@10.0.0.2:NetBeans/CopyTest"/>
  </exec>
</target>

For Windows it will be slightly different:

<target name="-post-jar">
  <echo level="info" message="Copying dist directory to remote Pi"/>
  <exec executable="C:\pi\putty\pscp.exe" dir="${basedir}">
    <arg line="-r"/>
    <arg value="dist"/>
    <arg value="pi@raspi:NetBeans/CopyTest"/>
  </exec>
</target>

You will also need to ensure that pscp.exe is in your PATH (or specify a fully qualified pathname). From now on, whenever you clean and build the project, the dist directory will automatically be copied to the Raspberry Pi ready for testing.
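As a final check, you can launch the freshly copied build on the Pi over the same SSH setup. The jar name and path below are only placeholders based on the CopyTest example above; substitute whatever your project's dist directory actually contains:

ssh pi@10.0.0.2 'java -jar NetBeans/CopyTest/dist/CopyTest.jar'   # jar name is a placeholder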

    Read the article

< Previous Page | 734 735 736 737 738 739 740 741 742 743 744 745  | Next Page >