Search Results

Search found 27016 results on 1081 pages for 'entry point'.


  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle tested on this and failed to achieve my goal, which is to deny all access to all directories except the Public directory and allow access to all other directories only from specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html

    From the installation script these mods are enabled:

    sudo a2enmod ssl
    sudo a2enmod proxy
    sudo a2enmod proxy_http
    sudo a2enmod rewrite
    sudo a2ensite default-ssl

    Outside of the script I copied the sites-available to sites-enabled then reloaded Apache. I have a directory created for Railo CFML located at /var/www/Railo/. Navigating the browser to http://Server_IP_Address/Railo forces SSL and redirects to https://Server_IP_Address/Railo, which shows index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appear to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this:

    Railo
        error
        Public
        NotPublic
        Sandbox

    These are my sites-enabled configurations:

    <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www
        #Default Deny All to prevent walking backwards in file system
        Alias /Railo/ "/var/www/Railo/"
        <Directory ~ ".*/Railo/(?!Public).*">
            Order Deny,Allow
            Deny from All
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
        DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc
        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]
    </VirtualHost>

    and

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www
        Alias /Railo/ "/var/www/Railo/"
        <Directory ~ "/var/www/Railo/(?!Public).*">
            Order Deny,Allow
            Deny from All
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
        # SSL Engine Switch:
        # Enable/Disable SSL for this virtual host.
        SSLEngine on
        # A self-signed (snakeoil) certificate can be created by installing
        # the ssl-cert package. See
        # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
        # If both key and certificate are stored in the same file, only the
        # SSLCertificateFile directive is needed.
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
Using default /etc/apache2/logs/jk-runtime-status. I've checked my regular expression against the paths of the directories with RegexBuddy just to be sure the pattern itself was correct. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive does seem to be working correctly for blocking access to the Railo admin site.
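    One avenue worth exploring (a sketch only, not a verified fix): requests for .cfm/.cfc content are handed to Tomcat/Railo by ProxyPassMatch and mod_jk before Apache ever maps them to the filesystem, so a <Directory> block under /var/www/Railo may never be consulted for those requests. Matching on the request URL with <LocationMatch>, which does apply to proxied requests, is one alternative:

    # Untested sketch: deny everything under /Railo/ except Public,
    # letting only a trusted address range through (replace 203.0.113.0/24).
    <LocationMatch "^/Railo/(?!Public(/|$))">
        Order Deny,Allow
        Deny from all
        Allow from 203.0.113.0/24
    </LocationMatch>

    As a side note, the "No JkShmFile defined" warning can usually be silenced by adding an explicit JkShmFile /var/log/apache2/jk-runtime-status directive inside the <IfModule mod_jk.c> block.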

    Read the article

  • Metro: Grouping Items in a ListView Control

    - by Stephen.Walther
    The purpose of this blog entry is to explain how you can group list items when displaying the items in a WinJS ListView control. In particular, you learn how to group a list of products by product category. Displaying a grouped list of items in a ListView control requires completing the following steps:

    1. Create a grouped data source from a List data source
    2. Create a group header template
    3. Declare the ListView control so it groups the list items

    Creating the Grouped Data Source

    Normally, you bind a ListView control to a WinJS.Binding.List object. If you want to render list items in groups, then you need to bind the ListView to a grouped data source instead. The following code – contained in a file named products.js – illustrates how you can create a standard WinJS.Binding.List object from a JavaScript array and then return a grouped data source from the WinJS.Binding.List object by calling its createGrouped() method:

    (function () {
        "use strict";

        // Create List data source
        var products = new WinJS.Binding.List([
            { name: "Milk", price: 2.44, category: "Beverages" },
            { name: "Oranges", price: 1.99, category: "Fruit" },
            { name: "Wine", price: 8.55, category: "Beverages" },
            { name: "Apples", price: 2.44, category: "Fruit" },
            { name: "Steak", price: 1.99, category: "Other" },
            { name: "Eggs", price: 2.44, category: "Other" },
            { name: "Mushrooms", price: 1.99, category: "Other" },
            { name: "Yogurt", price: 2.44, category: "Other" },
            { name: "Soup", price: 1.99, category: "Other" },
            { name: "Cereal", price: 2.44, category: "Other" },
            { name: "Pepsi", price: 1.99, category: "Beverages" }
        ]);

        // Create grouped data source
        var groupedProducts = products.createGrouped(
            function (dataItem) { return dataItem.category; },
            function (dataItem) { return { title: dataItem.category }; },
            function (group1, group2) { return group1.charCodeAt(0) - group2.charCodeAt(0); }
        );

        // Expose the grouped data source
        WinJS.Namespace.define("ListViewDemos", {
            products: groupedProducts
        });
    })();

    Notice that the createGrouped() method requires three functions as arguments:

    groupKey – This function associates each list item with a group. The function accepts a data item and returns a key which represents a group. In the code above, we return the value of the category property for each product.
    groupData – This function returns the data item displayed by the group header template. For example, in the code above, the function returns a title for the group which is displayed in the group header template.
    groupSorter – This function determines the order in which the groups are displayed. The code above displays the groups in alphabetical order: Beverages, Fruit, Other.

    Creating the Group Header Template

    Whenever you create a ListView control, you need to create an item template which you use to control how each list item is rendered. When grouping items in a ListView control, you also need to create a group header template. The group header template is used to render the header for each group of list items.
    Here’s the markup for both the item template and the group header template:

    <div id="productTemplate" data-win-control="WinJS.Binding.Template">
        <div class="product">
            <span data-win-bind="innerText:name"></span>
            <span data-win-bind="innerText:price"></span>
        </div>
    </div>

    <div id="productGroupHeaderTemplate" data-win-control="WinJS.Binding.Template">
        <div class="productGroupHeader">
            <h1 data-win-bind="innerText: title"></h1>
        </div>
    </div>

    You should declare the two templates in the same file as you declare the ListView control – for example, the default.html file.

    Declaring the ListView Control

    The final step is to declare the ListView control. Here’s the required markup:

    <div data-win-control="WinJS.UI.ListView"
        data-win-options="{
            itemDataSource:ListViewDemos.products.dataSource,
            itemTemplate:select('#productTemplate'),
            groupDataSource:ListViewDemos.products.groups.dataSource,
            groupHeaderTemplate:select('#productGroupHeaderTemplate'),
            layout: {type: WinJS.UI.GridLayout}
        }">
    </div>

    In the markup above, five properties of the ListView control are set when the control is declared. First the itemDataSource and itemTemplate are specified. Nothing new here. Next, the group data source and group header template are specified. Notice that the group data source is represented by the ListViewDemos.products.groups.dataSource property of the grouped data source. Finally, notice that the layout of the ListView is changed to Grid Layout. You are required to use Grid Layout (instead of the default List Layout) when displaying grouped items in a ListView.

    Here’s the entire contents of the default.html page:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8">
        <title>ListViewDemos</title>

        <!-- WinJS references -->
        <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
        <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
        <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

        <!-- ListViewDemos references -->
        <link href="/css/default.css" rel="stylesheet">
        <script src="/js/default.js"></script>
        <script src="/js/products.js" type="text/javascript"></script>

        <style type="text/css">
            .product {
                width: 200px;
                height: 100px;
                border: white solid 1px;
                font-size: x-large;
            }
        </style>
    </head>
    <body>
        <div id="productTemplate" data-win-control="WinJS.Binding.Template">
            <div class="product">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
            </div>
        </div>
        <div id="productGroupHeaderTemplate" data-win-control="WinJS.Binding.Template">
            <div class="productGroupHeader">
                <h1 data-win-bind="innerText: title"></h1>
            </div>
        </div>
        <div data-win-control="WinJS.UI.ListView"
            data-win-options="{
                itemDataSource:ListViewDemos.products.dataSource,
                itemTemplate:select('#productTemplate'),
                groupDataSource:ListViewDemos.products.groups.dataSource,
                groupHeaderTemplate:select('#productGroupHeaderTemplate'),
                layout: {type: WinJS.UI.GridLayout}
            }">
        </div>
    </body>
    </html>

    Notice that the default.html page includes a reference to the products.js file:

    <script src="/js/products.js" type="text/javascript"></script>

    The default.html page also contains the declarations of the item template, group header template, and ListView control.

    Summary

    The goal of this blog entry was to explain how you can group items in a ListView control. You learned how to create a grouped data source, a group header template, and declare a ListView so that it groups its list items.

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect wasn't related to thread creation time, but rather that placement was a function of process membership. At this point I consulted the Solaris sources and talked with folks in the Solaris group.

    The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.
    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior.

    There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically (a placeholder illustration appears at the end of this entry). In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more.

    To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code.

    Remarks:

    - It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry.
    - The scheduler is work-conserving.
    - The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.
    - Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however.
    That is, it's an intra-socket policy, not an inter-socket policy.

    - Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.
    - If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention for shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
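    As a concrete illustration of the /etc/system tunable mentioned above, the knob is set like any other kernel variable; the value below is deliberately a placeholder, since the threshold is expressed in lgroup load-average units and the right number is platform-specific, so check mpo_update_tunables() in the sources before setting anything real.

    * /etc/system -- illustrative placeholder only
    set lgrp_expand_proc_thresh = <threshold>

    Lowering the threshold makes the kernel give up on packing a process's threads onto its home lgroup sooner, spreading them across sockets earlier; raising it keeps them packed longer.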

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application I thought “Hey, this should be easy!” because Windows Phone 7 development is based on Silverlight, and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can’t use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there’s a reference to an assembly that’s not available (System.Windows.Browser).

    My second thought was “OK, no problem!” because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you’d like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that’s running in the context of a page in a SharePoint site, because the credentials of the currently logged on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site’s page, so the application should build credentials that have to be passed to SharePoint’s WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient class doesn’t support authentication; there is a Credentials property, but when you set it and make the request you get a NotImplementedException.

    This issue will probably be solved in the very near future, since Silverlight 4 does support authentication, and there’s already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I’m writing in this paragraph are just assumptions that I make which make a lot of sense IMHO; I don’t have any info that all of this will happen, but I really hope so.

    So are SharePoint developers out of the Windows Phone development game until they get this fixed? Luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data.
    The following code snippet gets all the announcements of an Announcements list in a SharePoint site:

    HttpWebRequest webReq =
        (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");
    webReq.Credentials = new NetworkCredential("username", "password");

    webReq.BeginGetResponse(
        (result) =>
        {
            HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;

            XDocument xdoc = XDocument.Load(
                ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());

            XNamespace ns = "http://www.w3.org/2005/Atom";
            var items = from item in xdoc.Root.Elements(ns + "entry")
                        select new { Title = item.Element(ns + "title").Value };

            this.Dispatcher.BeginInvoke(() =>
            {
                foreach (var item in items)
                    MessageBox.Show(item.Title);
            });
        }, webReq);

    When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly; the code uses LINQ to XML to parse the resulting Atom feed, and the Title of every announcement is displayed in a MessageBox. Check out my previous post if you’d like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data.

    When you plan to use this technique, it’s of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method that gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

    delegate void GetAtomFeedCallback(Stream responseStream);

    public MainPage()
    {
        InitializeComponent();

        SupportedOrientations = SupportedPageOrientation.Portrait |
            SupportedPageOrientation.Landscape;

        string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
        string username = "username";
        string password = "password";
        string domain = "";

        GetAtomFeed(url, username, password, domain, (s) =>
        {
            XNamespace ns = "http://www.w3.org/2005/Atom";
            XDocument xdoc = XDocument.Load(s);

            var items = from item in xdoc.Root.Elements(ns + "entry")
                        select new { Title = item.Element(ns + "title").Value };

            this.Dispatcher.BeginInvoke(() =>
            {
                foreach (var item in items)
                {
                    MessageBox.Show(item.Title);
                }
            });
        });
    }

    private static void GetAtomFeed(string url, string username,
        string password, string domain, GetAtomFeedCallback cb)
    {
        HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
        webReq.Credentials = new NetworkCredential(username, password, domain);

        webReq.BeginGetResponse(
            (result) =>
            {
                HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
                HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
                cb(resp.GetResponseStream());
            }, webReq);
    }

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
    Learning Outcomes

    - Name different device categories
    - Discuss the functions and structure of I/O modules
    - Describe the principles of Programmed I/O
    - Describe the principles of Interrupt-driven I/O
    - Describe the principles of DMA
    - Discuss the evolution characteristics of I/O channels
    - Describe different types of I/O interface
    - Explain the principles of point-to-point and multipoint configurations
    - Discuss the way in which a FireWire serial bus functions
    - Discuss the principles of InfiniBand architecture

    External Devices

    An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories…

    - Human readable – e.g. video display
    - Machine readable – e.g. magnetic disk
    - Communications – e.g. wifi card

    I/O Modules

    An I/O module has two major functions…

    - Interface to the processor and memory via the system bus or central switch
    - Interface to one or more peripheral devices by tailored data links

    Module Functions

    The major functions or requirements for an I/O module fall into the following categories…

    - Control and timing
    - Processor communication
    - Device communication
    - Data buffering
    - Error detection

    The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following…

    - Command decoding
    - Data
    - Status reporting
    - Address recognition

    The I/O module must be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure

    An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. There are 3 techniques possible for I/O operations:

    - Programmed I/O
    - Interrupt-driven I/O
    - Direct Memory Access (DMA)

    Programmed I/O

    When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor.

    I/O Commands

    To execute an I/O related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor…

    - Control – used to activate a peripheral and tell it what to do
    - Test – used to test various status conditions associated with an I/O module and its peripherals
    - Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
    - Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral

    The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions

    With programmed I/O there is a close correspondence between the I/O related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system – each device is given a unique identifier or address – and when the processor issues an I/O command, the command contains the address of the desired device, so each I/O module must interpret the address lines to determine if the command is for itself.

    When the processor, main memory and I/O share a common bus, two modes of addressing are possible…

    - Memory-mapped I/O
    - Isolated I/O

    (for a detailed explanation read page 245 of the book) The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up.

    Interrupt-driven I/O

    Interrupt-driven I/O works as follows…

    - The processor issues an I/O command to a module and then goes on to do some other useful work
    - The I/O module will then interrupt the processor to request service when it is ready to exchange data with the processor
    - The processor then executes the data transfer and then resumes its former processing

    Interrupt Processing

    The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation the following sequence of hardware events occurs…

    - The device issues an interrupt signal to the processor
    - The processor finishes execution of the current instruction before responding to the interrupt
    - The processor tests for an interrupt – determines that there is one – and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal
    - The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed.
    - The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify these
    - The interrupt handler processes the interrupt – this includes examination of status information relating to the I/O operation or other event that caused the interrupt
    - When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
    - Finally, the PSW and program counter values from the stack are restored.
    Design Issues

    Two design issues arise in implementing interrupt I/O:

    - Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
    - If multiple interrupts have occurred, how does the processor decide which one to process?

    Addressing device recognition, 4 general categories of techniques are in common use…

    - Multiple interrupt lines
    - Software poll
    - Daisy chain
    - Bus arbitration

    For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks…

    - The I/O transfer rate is limited by the speed with which the processor can test and service a device
    - The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer

    Direct Memory Access

    When large volumes of data are to be moved, an efficient technique is direct memory access (DMA).

    DMA Function

    DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (most common – referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending to the DMA module the following information…

    - Whether a read or write is requested, using the read or write control line between the processor and the DMA module
    - The address of the I/O device involved, communicated on the data lines
    - The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
    - The number of words to be read or written, communicated via the data lines and stored in the data count register

    The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor, thus the processor is involved only at the beginning and end of the transfer.

    I/O Channels and Processors

    Characteristics of I/O Channels

    As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions – such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common:

    - A selector channel controls multiple high-speed devices.
    - A multiplexor channel can handle I/O with multiple devices at the same time, exchanging characters with each as fast as possible.
    The external interface: FireWire and InfiniBand

    Types of Interfaces

    One major characteristic of the interface is whether it is serial or parallel:

    - Parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
    - Serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time

    With new generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows…

    - The I/O module sends a control signal requesting permission to send data
    - The peripheral acknowledges the request
    - The I/O module transfers data
    - The peripheral acknowledges receipt of data

    For a detailed explanation of FireWire and InfiniBand technology read pages 264–270 of the textbook.

    Read the article

  • top tweets WebLogic Partner Community – June 2013

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity. Please feel free to send us your news! Lucas Jellema ?Getting started with Java EE 7: The Tutorial http://docs.oracle.com/javaee/7/tutorial/doc/home.htm … Simon Haslam I'm looking forward to starting a "WLS on ODA" proof of concept - some ideas for testing: http://www.veriton.co.uk/roller/fmw/entry/virtualised_oda_proof_of_concept … Frank Munz ?It's not too late - I just submitted two presentations about #OracleWebLogic and #Coherence for the @DOAGeV conference in Nürnberg. Did you? Arun Gupta ?Tyrus 1.0 User Guide: https://tyrus.java.net/documentation/1.0/user-guide.html … #WebSocket #JavaEE7 #GlassFish Arun Gupta #JavaEE7 Launch Webinar Technical Breakout replays on Youtube: http://bit.ly/12uUicT JSON 1.0 , EJB .2, Batch 1.0 more coming! OracleBlogs ?FREE Virtual Developer Day: Java SE, Java EE, Java Emebedded on Jun 19th and 25th http://ow.ly/2xBkwV Markus Eisele #Oracle #JavaSE Critical Patch Update Pre-Release Announcement - June 2013 http://www.oracle.com/technetwork/topics/security/javacpujun2013-1899847.html … #security OracleSupport_WLS ?Simple Custom #JMX MBeans with #WebLogic 12c and #Spring http://pub.vitrue.com/3kEr Oracle Technet Building Java HTML5/WebSocket Applications with JSR 356 - 4pm - Grand Ballroom Salon A/B #qconnewyork WebLogic Community Oracle Fusion Middleware (OFM) 11g (11.1.1.7) Starter Kit available & Customizable Demos http://wp.me/p1LMIb-BK Oracle Technet #Java EE 7: Moving Java Forward for the Enterprise | @java http://pub.vitrue.com/tHiM OTNArchBeat ?Oracle Forms to ADF Modernization Reference - Convero (AMEC) Project | @AndrejusB http://pub.vitrue.com/lZPR WebLogic Community ?ExaLogic In Memory Applications & Whitepapers Building Large Scale E-Commerce Platforms & Rethink the Entire Application Lifecycle… WebLogic Community ?Coherence YouTube videos http://wp.me/p1LMIb-BG Arun Gupta ?WARNING: Next 2 days are going to be loaded with #JavaEE7 launch related tweets, and offline next week! JDeveloper & ADF Using Contextual Event in Oracle ADF http://dlvr.it/3Vpybr Oracle WebLogic Check out new blog on #hybrid_cloud & why choice is important http://bit.ly/1b1QGhL Andrejus Baranovskis Oracle Forms to ADF Modernization Reference - Convero (AMEC) Project http://fb.me/1M9iWNmAw WebLogic Community WebLogic on Oracle Database Appliance by Frances Zhao http://wp.me/p1LMIb-BE OTNArchBeat ?New: A-Team Chronicles >> A great resource for technical content covering Oracle Fusion Middleware / Fusion Apps http://pub.vitrue.com/qbzS Oracle for Partners ?Take Java To The Edge: Java Virtual Developer Day – June 19 & June 25 http://bit.ly/19fGlSX Adam Bien ?Looking forward to tomorrow's #javaee7 + #angularjs #html5 marriage at #jpoint. See you there: http://www.jpoint.nl/meetingpoint/editie-2013#sessie-1 … shay shmeltzer ?There is a new patch for the #Oracle #ADF Mobile extension - use help->check for updates to get it. Frank Munz ?Not using @OracleWebLogic 12c yet? Australia does! Reviews from my @AUSOUG workshops in Brisbane, Adelaide and Perth. http://goo.gl/BfVc4 Arun Gupta ?WebSocket, Server-Sent Events, #JavaEE7 sessions accepted at #jaxlondon ... that's gonna be at least third trip to London this year! 
WebLogic Community SPARC T5-8 Delivers Best Single System SPECjEnterprise2010 Benchmark running WebLogic 12c http://wp.me/p1LMIb-BC WebLogic Community The Ultimate Java EE Event - 16 Power Workshops mit allen wichtigen Java-EE-Themen http://wp.me/p1LMIb-BY Oracle WebLogic ?@OracleWebLogic 7 Jun New Blog Post: Using try-with-resources with JDBC objects http://ow.ly/2xryb5 JDeveloper & ADF Switching Lists of Values http://dlvr.it/3PbCkw WebLogic Community ?YouTube channel Learning Oracle's ADF http://wp.me/p1LMIb-zA Markus Eisele [GER] RT @heisedc: #Java-Entwicklung in #Oracles Public #Cloud http://heise.de/-1866388/ftw OracleBlogs ?Coherence Incubator & Community Source Code & Release Documentation http://ow.ly/2x2fXK chriscmuir ?New blog post: Migrating ADF Mobile apps from 1.0 to 1.1 https://blogs.oracle.com/onesizedoesntfitall/entry/migrating_adf_mobile_apps_from … JDeveloper & ADF ?ADF JavaScript Partitioning for Performance http://dlvr.it/3Trw15 WebLogic Community WebLogic Server Security Workshop June 27th 2013 Germany http://wp.me/p1LMIb-C7 WebLogic Community Oracle Optimized Solution for WebLogic Server 12c http://wp.me/p1LMIb-BA WebLogic Community Virtualize and Run Your Forms Applications in the Cloud - Now On Demand http://wp.me/p1LMIb-By Lucas Jellema Innteresting presentation on various aspects of end user assistance in Fusion Applications (ADF based): http://www.slideshare.net/uobroin/ouag-ireland-final2012slideshare … Adam Bien ?Summer Of JavaEE Workshops And Gigs: Free Hacking night:11.06.2013, Utrecht JavaEE 7 Meets HTML 5 and AngularJ... http://bit.ly/11XRjt4 WebLogic Community ?Real World ADF Design & Architecture Principles Trainings Germany, Poland & Portugal http://wp.me/p1LMIb-Bw Oracle for Partners ?JAVA Virtual Developer Day – June 19 & June 25 - Watch educational content and engage with Oracle experts online https://oracle.6connex.com/portal/java2013/login/?langR=en_US&mcc=OPNNSL … Markus Eisele ?[blog] Java EE 7 is final. Thoughts, Insights and further Pointers. http://dlvr.it/3SrxnB #javaee7 WebLogic Community Oracle takes the top spot for market share in the Application Server Market Segment for 2012 http://wp.me/p1LMIb-Bu OTNArchBeat ?Oracle ACE Director @LucasJellema is "very pleasantly surprised" with the new ADF Academy. 
http://pub.vitrue.com/8fad chriscmuir ?Sell out crowd for our ADF architecture course in Munich #adfarch pic.twitter.com/zhNtQJ25JV Markus Eisele ?[blog] New German Article: Java 7 Update 21 Security Improvements http://dlvr.it/3Sc8V9 #java #heise #security Markus Eisele ?[blog] New German Article: Oracle Java Cloud Service http://dlvr.it/3Sc20V #java #heise #OracleCloud OracleSupport_WLS ?Troubleshooting and Tuning with #WebLogic - Developer Webcast now available on #Youtube http://pub.vitrue.com/GSOy Andrejus Baranovskis New ADF Academy - Impressive Concept for ADF eLearning http://fb.me/2kYSMKKR5 OracleSupport_WLS ?Removing a #weblogic domain properly http://pub.vitrue.com/ZndM WebLogic Community WebLogic Partner Community Newsletter May 2013 http://wp.me/p1LMIb-Bp Oracle WebLogic ?Blog: Troubleshooting tools Part 3- Heap Dumps #Oracle #WebLogic Read the series http://bit.ly/14CQSD2 Oracle WebLogic ?Blog: #WebLogic_Server on #Oracle_Database_Appliance- How to conjure a WebLogic cluster- http://bit.ly/11fciHA Oracle WebLogic ?Check out new cool features in Oracle Traffic Director- http://bit.ly/11fbz9h WebLogic Community Additional new material WebLogic Community April 2013 http://wp.me/p1LMIb-zM WebLogic Community New WebLogic references - we want yours http://wp.me/p1LMIb-zK OracleSupport_WLS ?#Weblogic Session Replication jsession ID and F5 http://pub.vitrue.com/dWZp OracleBlogs ?top tweets WebLogic Partner Community May 2013 http://ow.ly/2xc8M5 WebLogic Community Welcome to the Spring edition of Oracle Scene http://wp.me/p1LMIb-zE Andreas Koop ?[blog post] ADF: Static Values View Object does not show any values (solved) http://bit.ly/14RDZ8p OracleBlogs ?ADF Mobile - accessing the SQLite database http://ow.ly/2x85r0 OracleSupport_WLS Youtube channel- Troubleshooting and Tuning with #WebLogic.#JRockit #SOAP #JRF http://pub.vitrue.com/qMxu Arun Gupta Next Java Magazine is all about #JavaEE7...productivity, HTML5, WebSocket, Batch & more. Subscribe http://ow.ly/lkD5D (@Oraclejavamag) Oracle WebLogic How to configure a #WebLogic cluster on #Oracle_Database_Appliance? It’s easy, read how. http://bit.ly/11fciHA Oracle WebLogic ?Blog: How to use Heap Dumps to troubleshooting memory leaks- #Oracle #WebLogic_Server http://bit.ly/14CQSD2 OracleBlogs ?Over 100 Images To Be Added to NetBeans Platform Showcase http://ow.ly/2x7Fvp Lucas Jellema A new release of the ADF EMG Task Flow Tester is now available for both JDeveloper 11 R1 and R2. https://java.net/projects/adf-task-flow-tester/pages/GettingStarted … WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I’m going to outline a few common methods that can be used to increase the coverage of your test suite. This won’t be yet another post on why you should be doing testing; there are plenty of those types of posts already out there. Assuming you know you should be testing, then comes the problem of how do I actually fit that into my day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows to writing a program. When writing a program you can do it from a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid under certain contexts. Each approach you are skilled at applying is another tool in your tool belt. The more vectors of attack you have on a problem – the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into 2 categories: test first or test after. Test first is the best approach. However, this post isn’t about the one and only best approach. I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests. In this post I’ll enumerate some test-after workflows. In my next post I’ll cover test-first.

    Bug Reporting

    When someone calls you up or forwards you an email with a vague description of a bug, it’s usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Often reproduction plans, when written down, might skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting.

    Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug. The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system (see the sketch at the end of this post). This workflow can easily be extended for Enhancement Requests as well as Bug Reporting.

    Exploratory Testing

    Exploratory testing comes in when you aren’t sure how the system will behave in a new scenario. The scenario wasn’t planned for in the initial system requirements and there isn’t an existing test for it. By definition the system behaviour is “undefined”.

    So write a new unit test to define that behaviour. Add assertions to the tests to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples

    This workflow is especially good when developing APIs. When you are finally done with your production API then comes the job of writing documentation on how to consume the API. Good documentation will also include code examples. Don’t let these code examples merely exist in some accompanying manual; implement them in a test suite.
    Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests

    Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade the end users will be hosed, and they will be scratching their heads as to how it could be possible that an update got released with this core functionality broken.

    The tests for this core functionality are referred to as “smoke tests”. It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis

    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item.

    The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn’t be used strictly for numbers reporting. Companies shouldn’t set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless.

    The analysis tool will break down the coverage by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests.

    When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low-hanging fruit to be picked. There may be some classes or methods that aren’t being tested, which easily could be. The other reaction type is “OMG”. This is where you find a critical piece of code that isn’t under test. In both cases, go and add the missing tests.

    Test Refactoring

    The general theme of this post up to this point has been how to add more and more tests to a test suite. I’ll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code.

    Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn’t interfere with this primary goal.

    Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read then change them. Look to remove duplication. Duplicate setup code between two or more test methods can be moved to a shared function. Entire test methods can be removed if it is found that the scenario they test is covered by other tests. It’s OK to delete a test that isn’t pulling its own weight anymore.

    Remember to only start refactoring when all the tests are green. Don’t refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system. The unchanging, passing production code serves as the tests for the test suite while refactoring the tests.
As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done.  Fit this into the standard red-green-refactor cycle.  The refactor step applies not only to the production code but also to the tests, just not at the same time.  Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy).   That about covers most of the test-after workflows I can think of.  In my next post I’ll get into test-first workflows.

    Read the article

  • Configuration "diff" across Oracle WebCenter Sites instances

    - by Mark Fincham-Oracle
    Problem Statement With many Oracle WebCenter Sites environments - how do you know if the various configuration assets and settings are in sync across all of those environments? Background At Oracle we typically have a "W" shaped set of environments.  For the "Production" environments we typically have a disaster recovery clone as well and sometimes additional QA environments alongside the production management environment. In the case of www.java.com we have 10 different environments. All configuration assets/settings (CSElements, Templates, Start Menus etc..) start life on the Development Management environment and are then published downstream to other environments as part of the software development lifecycle. Ensuring that each of these 10 environments has the same set of Templates, CSElements, StartMenus, TreeTabs etc.. is impossible to do efficiently without automation. Solution Summary  The solution comprises of two components. A JSON data feed from each environment. A simple HTML page that consumes these JSON data feeds.  Data Feed: Create a JSON WebService on each environment. The WebService is no more than a SiteEntry + CSElement. The CSElement queries various DB tables to obtain details of the assets/settings returning this data in a JSON feed. Report: Create a simple HTML page that uses JQuery to fetch the JSON feed from each environment and display the results in a table. Since all assets (CSElements, Templates etc..) are published between environments they will have the same last modified date. If the last modified date of an asset is different in the JSON feed or is mising from an environment entirely then highlight that in the report table. Example Solution Details Step 1: Create a Site Entry + CSElement that outputs JSON Site Entry & CSElement Setup  The SiteEntry should be uncached so that the most recent configuration information is returned at all times. In the CSElement set the contenttype accordingly: Step 2: Write the CSElement Logic The basic logic, that we repeat for each asset or setting that we are interested in, is to query the DB using <ics:sql> and then loop over the resultset with <ics:listloop>. For example: <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" /> "templates": [ <ics:listloop listname="TemplateList"> {"name":"<ics:listget listname="TemplateList"  fieldname="name"/>", "modified":"<ics:listget listname="TemplateList"  fieldname="updateddate"/>"}, </ics:listloop> ], A comprehensive list of SQL queries to fetch each configuration asset/settings can be seen in the appendix at the end of this article. For the generation of the JSON data structure you could use Jettison (the library ships with the 11.1.1.8 version of the product), native Java 7 capabilities or (as the above example demonstrates) you could roll-your-own JSON output but that is not advised. Step 3: Create an HTML Report The JavaScript logic looks something like this.. 
1) Create a list of JSON feeds to fetch: ENVS['dev-mgmngt'] = 'http://dev-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json'; ENVS['dev-dlvry'] = 'http://dev-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';  ENVS['test-mngmnt'] = 'http://test-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';  ENVS['test-dlvry'] = 'http://test-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';   2) Create a function to get the JSON feeds: function getDataForEnvironment(url){ return $.ajax({ type: 'GET', url: url, dataType: 'jsonp', beforeSend: function (jqXHR, settings){ jqXHR.originalEnv = env; jqXHR.originalUrl = url; }, success: function(json, status, jqXHR) { console.log('....success fetching: ' + jqXHR.originalUrl); // store the returned data in ALLDATA ALLDATA[jqXHR.originalEnv] = json; }, error: function(jqXHR, status, e) { console.log('....ERROR: Failed to get data from [' + url + '] ' + status + ' ' + e); } }); } 3) Fetch each JSON feed: for (var env in ENVS) { console.log('Fetching data for env [' + env +'].'); var promisedData = getDataForEnvironment(ENVS[env]); promisedData.success(function (data) {}); }  4) For each configuration asset or setting create a table in the report For example, CSElements: 1) Get a list of unique CSElement names from all of the returned JSON data. 2) For each unique CSElement name, create a row in the table  3) Select 1 environment to represent the master or ideal state (e.g. "Everything should be like Production Delivery") 4) For each environment, compare the last modified date of this envs CSElement to the master. Highlight any differences in last modified date or missing CSElements. 5) Repeat...    Appendix This section contains various SQL statements that can be used to retrieve configuration settings from the DB.  
Templates  <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" /> CSElements <ics:sql sql="SELECT name,updateddate FROM CSElement WHERE status != 'VO'" listname="CSEList" table="CSElement" /> Start Menus <ics:sql sql="select sm.id, sm.cs_name, sm.cs_description, sm.cs_assettype, sm.cs_assetsubtype, sm.cs_itemtype, smr.cs_rolename, p.name from StartMenu sm, StartMenu_Sites sms, StartMenu_Roles smr, Publication p where sm.id=sms.ownerid and sm.id=smr.cs_ownerid and sms.pubid=p.id order by sm.id" listname="startList" table="Publication,StartMenu,StartMenu_Roles,StartMenu_Sites"/>  Publishing Configurations <ics:sql sql="select id, name, description, type, dest, factors from PubTarget" listname="pubTargetList" table="PubTarget" /> Tree Tabs <ics:sql sql="select tt.id, tt.title, tt.tooltip, p.name as pubname, ttr.cs_rolename, ttsect.name as sectname from TreeTabs tt, TreeTabs_Roles ttr, TreeTabs_Sect ttsect,TreeTabs_Sites ttsites LEFT JOIN Publication p  on p.id=ttsites.pubid where p.id is not null and tt.id=ttsites.ownerid and ttsites.pubid=p.id and tt.id=ttr.cs_ownerid and tt.id=ttsect.ownerid order by tt.id" listname="treeTabList" table="TreeTabs,TreeTabs_Roles,TreeTabs_Sect,TreeTabs_Sites,Publication" />  Filters <ics:sql sql="select name,description,classname from Filters" listname="filtersList" table="Filters" /> Attribute Types <ics:sql sql="select id,valuetype,name,updateddate from AttrTypes where status != 'VO'" listname="AttrList" table="AttrTypes" /> WebReference Patterns <ics:sql sql="select id,webroot,pattern,assettype,name,params,publication from WebReferencesPatterns" listname="WebRefList" table="WebReferencesPatterns" /> Device Groups <ics:sql sql="select id,devicegroupsuffix,updateddate,name from DeviceGroup" listname="DeviceList" table="DeviceGroup" /> Site Entries <ics:sql sql="select se.id,se.name,se.pagename,se.cselement_id,se.updateddate,cse.rootelement from SiteEntry se LEFT JOIN CSElement cse on cse.id = se.cselement_id where se.status != 'VO'" listname="SiteList" table="SiteEntry,CSElement" /> Webroots <ics:sql sql="select id,name,rooturl,updatedby,updateddate from WebRoot" listname="webrootList" table="WebRoot" /> Page Definitions <ics:sql sql="select pd.id, pd.name, pd.updatedby, pd.updateddate, pd.description, pdt.attributeid, pa.name as nameattr, pdt.requiredflag, pdt.ordinal from PageDefinition pd, PageDefinition_TAttr pdt, PageAttribute pa where pd.status != 'VO' and pa.id=pdt.attributeid and pdt.ownerid=pd.id order by pd.id,pdt.ordinal" listname="pageDefList" table="PageDefinition,PageAttribute,PageDefinition_TAttr" /> FW_Application <ics:sql sql="select id,name,updateddate from FW_Application where status != 'VO'" listname="FWList" table="FW_Application" /> Custom Elements <ics:sql sql="select elementname from ElementCatalog where elementname like 'CustomElements%'" listname="elementList" table="ElementCatalog" />

    Read the article

  • FMw Diagnostic Framework : Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction There is nothing more frustrating than a problem that "cannot be reproduced". Logs and configuration files have been analysed, but there just isn't enough information to establish the root cause. The issue may be closed, but you are left with the feeling that the problem will rear its ugly head again in the future. Trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework!  Diagnostic Framework monitors WebLogic Managed Servers and delivers "Automatic capture of diagnostic data upon first failure". To quote from Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 Diagnosing Problems: "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities." In other words the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as, or immediately after, the problem occurs. The table below shows the types of WebLogic Server issues which fall into the scope of Diagnostic Framework. How to Configure Diagnostic Framework? Depending on your Fusion Middleware product choice you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as Portal / Forms / Reports / Discoverer, Identity Management (OID, OAM, OIM etc), WebCenter and SOA. Check your WebLogic Domain directory structure. If you have an "adr" sub directory under DOMAIN_HOME/servers/<servername>/ then the JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" sub directory not exist, review the advice given in My Oracle Support article How to Apply FMW ( EM ) Control and JRF to a WebLogic Domain and Managed Servers [ID 947043.1]. If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) A couple of useful links about WLDF are listed below: Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g; WebLogic Diagnostics Framework - A Very Useful Tool [a nice blog which describes a WLDF use case]. How to Get Started With Diagnostic Framework To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning: Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 Diagnosing Problems. A lot of reading here, but if you are in a hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below. How to Upload Diagnostic Framework Incident Data to Oracle Support Some Background Information There are three interfaces to the Repository: Enterprise Manager Cloud Control (Support Workbench), WLST (Command Line) and ADRCI (Command Line). Enterprise Manager Cloud Control does provide a nice GUI interface to search, view and package diagnostic framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control. 
Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package. EM Cloud Control has it's own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command line tools. Ideally, you would only need to one command line interface, but currently I suggest using both - mainly due to the fact that ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code. Instructions: Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message. 1) Find the incident Note: The managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server the example WLST commands will not work. a) Launch WLST  Note: Use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin) otherwise you will get a syntax error like the one below Traceback (innermost last):  File "<console>", line 1, in ?NameError: listIncidents MW_HOME/oracle_common/common/bin/wlst.sh b) Connect to the managed server or the admin server e.g. wls:/offline> connect('weblogic','welcome1','t3://localhost:7020') c) Run the command wls:/MyDomain/serverConfig> listIncidents() This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use the command wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer' ,server='MyManagedServer') Example output Incident Id     Problem Key              Incident Time         1       DFW-99998 [java.lang.NullPointerException] [oracle.error.simulator.ErrorSimulator.createNullPointerException][errorWebApp_1-0-0-0]        Fri Nov 02 10:38:46 GMT 2012  The piece highlighted in bold is the description you do not see when using the ADRCI 'SHOW INCIDENT' command. Make a note of the incident id. You are ready to move to step 2 2. Package the incident a) Set up the environment - example commands below are for Unix cd <DOMAIN_HOME>/bin . ./setDomainEnv.sh If you want ADRCI to run a Remote Diagnostic Agent collection (recommended) at generate package time, point ORACLE_HOME at oracle_common ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME To prevent ADRCI from running RDA at generate package time, point ORACLE_HOME at WL_HOME/server/adr directory.  ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME b) Launch adrci $WL_HOME/server/adr/adrci c) Set BASE and HOMEPATH adrci> SET BASE /oracle/middleware/user_projects/domains/ mydomain/servers/mymanagedserver/adr adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver d)  Optionally run SHOW INCIDENTS e.g. 
adrci> SHOW INCIDENTS -MODE DETAIL ADR Home = /oracle/middleware/user_projects/domains/mydomain/ servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:***********************************************************************************************************************************INCIDENT INFO RECORD 1**********************************************************   INCIDENT_ID                   1   STATUS                        ready   CREATE_TIME                   2012-11-02 10:38:46.468000 +00:00   PROBLEM_ID                    1   CLOSE_TIME                    <NULL>   FLOOD_CONTROLLED              none   ERROR_FACILITY                DFW   ERROR_NUMBER                  99998   ERROR_ARG1                    <NULL>   ERROR_ARG2                    <NULL>   ERROR_ARG3                    <NULL>   ERROR_ARG4                    <NULL>   ERROR_ARG5                    <NULL>   ERROR_ARG6                    <NULL>   ERROR_ARG7                    <NULL>   ERROR_ARG8                    <NULL>   ERROR_ARG9                    <NULL>   ERROR_ARG10                   <NULL>   ERROR_ARG11                   <NULL>   ERROR_ARG12                   <NULL>   SIGNALLING_COMPONENT          <NULL>   SIGNALLING_SUBCOMPONENT       <NULL>   SUSPECT_COMPONENT             <NULL>   SUSPECT_SUBCOMPONENT          <NULL>   ECID                          5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325   IMPACTS                       01 rows fetched e)  Create a logical package IPS CREATE PACKAGE INCIDENT incident_number e.g. adrci> IPS CREATE PACKAGE INCIDENT 1Created package 1 based on incident id 1, correlation level typical f) Generate the package IPS GENERATE PACKAGE package_number IN path e.g. adrci> IPS GENERATE PACKAGE 1 IN /tmp Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete Note: If the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr 3) Upload the package zip to Oracle Support via your Service Request a) Log into My Oracle Support and locate your Service Request b) Click on "Add Attachments c) And upload the zip file

    Read the article

  • The Birth of a Method - Where did OUM come from?

    - by user702549
    It seemed fitting to start this blog entry with the OUM vision statement. The vision for the Oracle® Unified Method (OUM) is to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product.  Well, it’s that time of year again; we just finished testing and packaging OUM 5.6.  It will be released for general availability to qualifying customers and partners this month.  Because of this, I’ve been reflecting back on how the birth of Oracle’s Unified method - OUM came about. As the Release Director of OUM, I’ve been honored to package every method release.  No, maybe you’d say it’s not so special.  Of course, anyone can use packaging software to create an .exe file.  But to me, it is pretty special, because so many people work together to make each release come about.  The rich content that results is what makes OUM’s history worth talking about.   To me, professionally speaking, working on OUM, well it’s been “a labor of love”.  My youngest child was just 8 years old when OUM was born, and she’s now in High School!  Watching her grow and change has been fascinating, if you ask her, she’s grown up hearing about OUM.  My son would often walk into my home office and ask “How is OUM today, Mom?”  I am one of many people that take care of OUM, and have watched the method “mature” over these last 6 years.  Maybe that makes me a "Method Mom" (someone in one of my classes last year actually said this outloud) but there are so many others who collaborate and care about OUM Development. I’ve thought about writing this blog entry for a long time just to reflect on how far the Method has come. Each release, as I prepare the OUM Contributors list, I see how many people’s experience and ideas it has taken to create this wealth of knowledge, process and task guidance as well as templates and examples.  If you’re wondering how many people, just go into OUM select the resources button on the top of most pages of the method, and on that resources page click the ABOUT link. So now back to my nostalgic moment as I finished release 5.6 packaging.  I reflected back, on all the things that happened that cause OUM to become not just a dream but to actually come to fruition.  Here are some key conditions that make it possible for each release of the method: A vision to have one method instead of many methods, thereby focusing on deeper, richer content People within Oracle’s consulting Organization  willing to contribute to OUM providing Subject Matter Experts who are willing to write down and share what they know. Oracle’s continued acquisition of software companies, the need to assimilate high quality existing materials from these companies The need to bring together people from very different backgrounds and provide a common language to support Oracle Product implementations that often involve multiple product families What came first, and then what was the strategy? Initially OUM 4.0 was based on Oracle’s J2EE Custom Development Method (JCDM), it was a good “backbone”  (work breakdown structure) it was Unified Process based, and had good content around UML as well as custom software development.  But it needed to be extended in order to achieve the OUM Vision. What happened after that was to take in the “best of the best”, the legacy and acquired methods were scheduled for assimilation into OUM, one release after another.  We incrementally built OUM.  
We didn’t want to lose any of the expertise that was reflected in AIM (Oracle’s legacy Application Implementation Method), Compass (People Soft’s Application implementation method) and so many more. When was OUM born? OUM 4.1 published April 30, 2006.  This release allowed Oracles Advanced Technology groups to begin the very first implementations of Fusion Middleware.  In the early days of the Method we would prepare several releases a year.  Our iterative release development cycle began and continues to be refined with each Method release.  Now we typically see one major release each year. The OUM release development cycle is not unlike many Oracle Implementation projects in that we need to gather requirements, prioritize, prepare the content, test package and then go production.  Typically we develop an OUM release MoSCoW (must have, should have, could have, and won’t have) right after the prior release goes out.   These are the high level requirements.  We break the timeframe into increments, frequent checkpoints that help us assess the content and progress is measured through frequent checkpoints.  We work as a team to prioritize what should be done in each increment. Yes, the team provides the estimates for what can be done within a particular increment.  We sometimes have Method Development workshops (physically or virtually) to accelerate content development on a particular subject area, that is where the best content results. As the written content nears the final stages, it goes through edit and evaluation through peer reviews, and then moves into the release staging environment.  Then content freeze and testing of the method pack take place.  This iterative cycle is run using the OUM artifacts that make sense “fit for purpose”, project plans, MoSCoW lists, Test plans are just a few of the OUM work products we use on a Method Release project. In 2007 OUM 4.3, 4.4 and 4.5 were published.  With the release of 4.5 our Custom BI Method (Data Warehouse Method FastTrack) was assimilated into OUM.  These early releases helped us align Oracle’s Unified method with other industry standards Then in 2008 we made significant changes to the OUM “Backbone” to support Applications Implementation projects with that went to the OUM 5.0 release.  Now things started to get really interesting.  Next we had some major developments in the Envision focus area in the area of Enterprise Architecture.  We acquired some really great content from the former BEA, Liquid Enterprise Method (LEM) along with some SMEs who were willing to work at bringing this content into OUM.  The Service Oriented Architecture content in OUM is extensive and can help support the successful implementation of Fusion Middleware, as well as Fusion Applications. Of course we’ve developed a wealth of OUM training materials that work also helps to improve the method content.  It is one thing to write “how to”, and quite another to be able to teach people how to use the materials to improve the success of their projects.  I’ve learned so much by teaching people how to use OUM. What's next? So here toward the end of 2012, what’s in store in OUM 5.6, well, I’m sure you won’t be surprised the answer is Cloud Computing.   More details to come in the next couple of weeks!  The best part of being involved in the development of OUM is to see how many people have “adopted” OUM over these six years, Clients, Partners, and Oracle Consultants.  The content just gets better with each release.   
I’d love to hear your comments on how OUM has evolved, and ideas for new content you’d like to see in the upcoming releases.

    Read the article

  • Oracle Enterprise Manager 12c Configuration Best Practices (Part 3 of 3)

    - by Bethany Lapaglia
    This is part 3 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A “large” environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly. Part 1 of this series covered recommended configuration changes for the OMS and Repository Part 2 covered recommended changes for the Weblogic server Part 3 covers general configuration recommendations and a few known issues The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1]. Configuration Recommendations Configure E-Mail Notifications for EM related Alerts In some environments, the notifications for events for different target types may be sent to different support teams (i.e. notifications on host targets may be sent to a platform support team). However, the EM application administrators should be well informed of any alerts or problems seen on the EM infrastructure components. Recommendation: Create a new Incident rule for monitoring all EM components and setup the notifications to be sent to the EM administrator(s). The notification methods available can create or update an incident, send an email or forward to an event connector. To setup the incident rule set follow the steps below. Note that each individual rule in the rule set can have different actions configured. 1.  To create an incident rule for monitoring the EM components, click on Setup / Incidents / Incident Rules. On the All Enterprise Rules page, click on the out-of-box rule called “Incident management Ruleset for all targets” and then click on the Actions drop down list and select “Create Like Rule Set…” 2. For the rule set name, enter a name such as MTM Ruleset. Under the Targets tab, select “All targets of types” and select “OMS and Repository” from the drop down list. This target type contains all of the key EM components (OMS servers, repository, domains, etc.) 3. Click on the Rules tab. To edit a rule, click on the rule name and click on Edit as seen below 4. Modify the following rules: a. Incident creation Rule for metric alerts i. Leave the Type set as is but change the Severity to add Warning by clicking on the drop down list and selecting “Warning”. Click Next. ii.  Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. b. Incident creation Rule for target unreachable. i.   Leave the Type set as is but change the Target type to add OMS and Repository by clicking on the drop down list selecting “OMS and Repository”. Click Next. ii. 
Add or modify the actions as required (i.e. add email notifications) Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. 5.  Modify the actions for any other rule as required and be sure to click the “Save” push button to save the rule set or all changes will be lost. Configure Out-of-Band Notifications for EM Agent Out-of-Band notifications act as a backup when there’s a complete EM outage or a repository database issue. This is configured on the agent of the OMS server and can be used to send emails or execute another script that would create a trouble ticket. It will send notifications about the following issues: • Repository Database down • All OMS are down • Repository side collection job that is broken or has an invalid schedule • Notification job that is broken or has an invalid schedule Recommendation: To setup Out-of-Band Notifications, refer to the MOS note “How To Setup Out Of Bound Email Notification In 12c” (Doc ID 1472854.1) Modify the Performance Test for the EM Console Service The EM Console Service has an out-of-box defined performance test that will be run to determine the status of this service. The test issues a request via an HTTP method to a specific URL. By default, the HTTP method used for this test is a GET but for performance reasons, should be changed to HEAD. The URL used for this request is set to point to a specific OMS server by default. If a multi-OMS system has been implemented and the OMS servers are behind a load balancer, then the URL in this section must be modified to point to the load balancer name instead of a specific server name. If this is not done and a portion of the infrastructure is down then the EM Console Service will show down as this test will fail. Recommendation: Modify the HTTP Method for the EM Console Service test and the URL if required following the detailed steps below. 1.  To create an incident rule for monitoring the EM components, click on Targets / Services. From the list of services, click on the EM Console Service. 2. On the EM Console Service page, click on the Test Performance tab. 3.  At the bottom of the page, click on the Web Transaction test called EM Console Service Test 4.  Click on the Service Tests and Beacons breadcrumb near the top of the page. 5.  Under the Service Tests section, make sure the EM Console Service Test is selected and click on the Edit push button. 6.  Under the Transaction section, make sure the Access Logout page transaction is selected and click on the Edit push button 7) Under the Request section, change the HTTP Method from the default of GET to the recommended value of HEAD. The URL in this section must be modified to point to the load balancer name instead of a specific server name if multi-OMSes have been implemented. Check for Known Issues Job Purge Repository Job is Shown as Down This issue is caused after upgrading EM from 12c to 12cR2. On the Repository page under Setup ? Manage Cloud Control ? Repository, the job called “Job Purge” is shown as down and the Next Scheduled Run is blank. Also, repvfy reports that this is a missing DBMS_SCHEDULER job. Recommendation: In EM 12cR2, the apply_purge_policies have been moved from the MGMT_JOB_ENGINE package to the EM_JOB_PURGE package. 
To remove this error, execute the commands below: $ repvfy verify core -test 2 -fix To confirm that the issue resolved, execute $ repvfy verify core -test 2 It can also be verified by refreshing the Job Service page in EM and check the status of the job, it should now be Up. Configure the Listener Targets in EM with the Listener Password (where required) EM will report this error every time it is encountered in the listener log file. In a RAC environment, typically the grid home and rdbms homes are owned by different OS users. The listener always runs from the grid home. Only the listener process owner can query or change the listener properties. The listener uses a password to allow other OS users (ex. the agent user) to query the listener process for parameters. EM has a default listener target metric that will query these properties. If the agent is not permitted to do this, the TNS incident (TNS-1190) will be logged in the listener’s log file. This means that the listener targets in EM also need to have this password set. Not doing so will cause many TNS incidents (TNS-1190). Below is a sample of this error from the listener log file: Recommendation: Set a listener password and include it in the configuration of the listener targets in EM For steps on setting the listener passwords, see MOS notes: 260986.1 , 427422.1

    Read the article

  • Configuring Oracle HTTP Server 12c for WebLogic Server Domain

    - by Emin Askerov
    Oracle HTTP Server (OHS) 12c 12.1.2, which was released in July 2013 as a part of Oracle Web Tier 12c, is the web server component of Oracle Fusion Middleware. In essence this is Apache HTTP Server 2.2.22 (with critical bug fixes from higher versions) which includes modules developed specifically by Oracle. It provides listener functionality for Oracle WebLogic Server and the framework for hosting static pages, dynamic pages, and applications over the Web. OHS can be easily managed by the Weblogic Management Framework, a set of tools which provides administrative capabilities (start, stop, lifecycle operations, etc.) for Oracle Fusion Middleware products. In other words, all the tools which are familiar to us (Node Manager, WLST, Administration Console, Fusion Middleware Control etc.) are presented as part of the Weblogic Management Framework and are used for managing Java and System Components for both Weblogic Server and Standalone Domain types. You can familiarize yourself with these terms using the related documentation: 1. Introduction to Oracle HTTP Server: http://docs.oracle.com/middleware/1212/webtier/index.html 2. Weblogic Management Framework: http://docs.oracle.com/middleware/1212/core/ASCON/terminology.htm#ASCON11260 In this post I would like to cover a rather simple use case: how to configure OHS as a web proxy in a Weblogic Cluster environment. For example, we have an existing Weblogic Domain where some managed servers have been joined to a cluster and host business applications. We need to configure a web proxy component which will act as the entry point and load balancer for user requests to our cluster. Of course, we could install the good old Apache HTTP Server and configure the mod_wl plugin. However, this solution is not optimal from a manageability perspective: we need to install Apache, install an additional plugin and then configure it by editing a configuration file, which is not really convenient for FMW Administrators and often increases the time it takes to perform a simple administrative task. Alternatively, we could use OHS as a System Component within the Weblogic Domain and use the full power of the Weblogic Management Framework in order to configure, manage and monitor it! I like this idea! What about you? I hope after reading this post you will agree with me. First of all it is necessary to download the OHS binaries. You can use this link for downloading: http://www.oracle.com/technetwork/java/webtier/downloads/index2-303202.html As we will use Fusion Middleware Control for managing OHS instances, it is necessary to extend your domain with the Enterprise Manager and Oracle ADF and JRF templates. This is not a topic I will focus on in this post, but you can get more information from the documentation or one of my previous posts: http://docs.oracle.com/middleware/1212/wls/WLDTR/fmw_templates.htm#sthref64 https://blogs.oracle.com/imc/entry/the_specifics_of_adf_12c Note: you should have a properly configured Node Manager utility for managing OHS instances. Let's consider the configuration process step by step: 1. Shut down all Weblogic instances of the existing domain, including the Admin Server; 2. Install Oracle HTTP Server. You should use your Fusion Middleware Home Path (e.g. /u01/Oracle/FMW12) as the Installation Location and select the Colocated HTTP Server option as the Installation Type. I will not focus on this topic in this post. All information related to OHS installation can be found here: http://docs.oracle.com/middleware/1212/webtier/WTINS/install_gui.htm#i1082009 3. Next we need to extend our existing domain with the OHS component. 
In order to do this you should do the following: a. Run the Fusion Middleware Configuration Wizard (ORACLE_HOME/oracle_common/common/bin/config.sh); b. In step 1 select the Update an existing domain option and point to your Fusion Middleware Home Path; c. In step 2 check the Oracle HTTP Server and Oracle Enterprise Manager Plugin for WEBTIER templates; d. Go through the other steps without any changes and finish the configuration process. 4. Start the Admin Server and all managed servers related to your cluster. 5. Log in to Enterprise Manager FMW Control using the http://<hostname>:<port>/em URL. 6. Now we will create an OHS instance within our Weblogic Domain infrastructure. Navigate to the Weblogic Domain -> Administration -> Create/Delete OHS menu item; 7. Enter edit mode by clicking the Changes -> Lock&Edit menu item; 8. Create a new OHS instance by clicking the Create button; 9. Define the Instance Name (e.g. DevOSH) and Machine parameters; 10. Now we need to define the listen port. By default OHS will use port number 7777 for incoming HTTP requests. We can change it to any free port number we would like to use. In order to do so, right click on our created OHS instance (left hand panel) and navigate to Administration -> Port Configuration; 11. Click on the record with port number 7777 and then click the Edit button; 12. Change the port number value (in our case this will be 8080) and then click the OK button; 13. Now we need to edit the mod_wl_ohs configuration in order to enable OHS to act as a proxy for WebLogic Server Instances/Clusters; 14. In order to do so, right click on our created OHS instance (left panel) and navigate to Administration -> mod_wl_ohs Configuration; a. In Weblogic Cluster you should enter the cluster address (define <host:port> for all managed servers which participate in the cluster), e.g.: 192.168.56.2:7004,192.168.56.2:7005 b. Define the Weblogic Port parameter at which the Oracle WebLogic Server host is listening for connection requests from the module (or from other servers); c. Check the Dynamic Server List option. This will dynamically update the cluster list for every request; d. In the Location table define the list of endpoint locations which you would like to process. In order to do this click the Add Row button and define the Location, Weblogic Cluster, Path Trim and Path Prefix parameters (if required); e. Click the Apply button in order to save the changes. 15. Activate the changes by clicking the Changes -> Activate Changes menu item; 16. Finally we will start the configured OHS instance. Right click on the OHS instance tree item under the Web Tier folder and select the Control -> Start Up menu item; 17. Ensure that the OHS instance is up and running and then test your environment. Run an application deployed to your Weblogic Cluster, accessing it via the OHS web proxy.

    Read the article

  • Securing an ADF Application using OES11g: Part 1

    - by user12587121
    Future releases of the Oracle stack should allow ADF applications to be secured natively with Oracle Entitlements Server (OES). In a sequence of postings here I explore one way to achive this with the current technology, namely OES 11.1.1.5 and ADF 11.1.1.6. ADF Security Basics ADF Bascis The Application Development Framework (ADF) is Oracle’s preferred technology for developing GUI based Java applications.  It can be used to develop a UI for Swing applications or, more typically in the Oracle stack, for Web and J2EE applications.  ADF is based on and extends the Java Server Faces (JSF) technology.  To get an idea, Oracle provides an online demo to showcase ADF components. ADF can be used to develop just the UI part of an application, where, for example, the data access layer is implemented using some custom Java beans or EJBs.  However ADF also has it’s own data access layer, ADF Business Components (ADF BC) that will allow rapid integration of data from data bases and Webservice interfaces to the ADF UI component.   In this way ADF helps implement the MVC  approach to building applications with UI and data components. The canonical tutorial for ADF is to open JDeveloper, define a connection to a database, drag and drop a table from the database view to a UI page, build and deploy.  One has an application up and running very quickly with the ability to quickly integrate changes to, for example, the DB schema. ADF allows web pages to be created graphically and components like tables, forms, text fields, graphs and so on to be easily added to a page.  On top of JSF Oracle have added drag and drop tooling with JDeveloper and declarative binding of the UI to the data layer, be it database, WebService or Java beans.  An important addition is the bounded task flow which is a reusable set of pages and transitions.   ADF adds some steps to the page lifecycle defined in JSF and adds extra widgets including powerful visualizations. It is worth pointing out that the Oracle Web Center product (portal, content management and so on) is based on and extends ADF. ADF Security ADF comes with it’s own security mechanism that is exposed by JDeveloper at development time and in the WLS Console and Enterprise Manager (EM) at run time. The security elements that need to be addressed in an ADF application are: authentication, authorization of access to web pages, task-flows, components within the pages and data being returned from the model layer. One  typically relies on WLS to handle authentication and because of this users and groups will also be handled by WLS.  Typically in a Dev environment, users and groups are stored in the WLS embedded LDAP server. One has a choice when enabling ADF security (Application->Secure->Configure ADF Security) about whether to turn on ADF authorization checking or not: In the case where authorization is enabled for ADF one defines a set of roles in which we place users and then we grant access to these roles to the different ADF elements (pages or task flows or elements in a page). An important notion here is the difference between Enterprise Roles and Application Roles. The idea behind an enterprise role is that is defined in terms of users and LDAP groups from the WLS identity store.  “Enterprise” in the sense that these are things available for use to all applications that use that store.  The other kind of role is an Application Role and the idea is that  a given application will make use of Enterprise roles and users to build up a set of roles for it’s own use.  
These application roles will be available only to that application.   The general idea here is that the enterprise roles are relatively static (for example an Employees group in the LDAP directory) while application roles are more dynamic, possibly depending on time, location, accessed resource and so on.  One of the things that OES adds that is that we can define these dynamic membership conditions in Role Mapping Policies. To make this concrete, here is how, at design time in Jdeveloper, one assigns these rights in Jdeveloper, which puts them into a file called jazn-data.xml: When the ADF app is deployed to a WLS this JAZN security data is pushed to the system-jazn-data.xml file of the WLS deployment for the policies and application roles and to the WLS backing LDAP for the users and enterprise roles.  Note the difference here: after deploying the application we will see the users and enterprise roles show up in the WLS LDAP server.  But the policies and application roles are defined in the system-jazn-data.xml file.  Consult the embedded WLS LDAP server to manage users and enterprise roles by going to the domain console and then Security Realms->myrealm->Users and Groups: For production environments (or in future to share this data with OES) one would then perform the operation of “reassociating” this security policy and application role data to a DB schema (or an LDAP).  This is done in the EM console by reassociating the Security Provider.  This blog posting has more explanations and references on this reassociation process. If ADF Authentication and Authorization are enabled then the Security Policies for a deployed application can be managed in EM.  Our goal is to be able to manage security policies for the applicaiton rather via OES and it's console. Security Requirements for an ADF Application With this package tour of ADF security we can see that to secure an ADF application with we would expect to be able to take care of at least the following items: Authentication, including a user and user-group store Authorization for page access Authorization for bounded Task Flow access.  A bounded task flow has only one point of entry and so if we protect that entry point by calling to OES then all the pages in the flow are protected.  Authorization for viewing data coming from the data access layer In the next posting we will describe a sample ADF application and required security policies. 
References
ADF Dev Guide: Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework: Enabling ADF Security in a Fusion Web Application
Oracle tutorial on securing a sample ADF application (appears to require ADF 11.1.2)

    Read the article

  • C#/.NET Little Wonders: Skip() and Take()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. I’ve covered many valuable methods from System.Linq class library before, so you already know it’s packed with extension-method goodness.  Today I’d like to cover two small families I’ve neglected to mention before: Skip() and Take().  While these methods seem so simple, they are an easy way to create sub-sequences for IEnumerable<T>, much the way GetRange() creates sub-lists for List<T>. Skip() and SkipWhile() The Skip() family of methods is used to ignore items in a sequence until either a certain number are passed, or until a certain condition becomes false.  This makes the methods great for starting a sequence at a point possibly other than the first item of the original sequence.   The Skip() family of methods contains the following methods (shown below in extension method syntax): Skip(int count) Ignores the specified number of items and returns a sequence starting at the item after the last skipped item (if any).  SkipWhile(Func<T, bool> predicate) Ignores items as long as the predicate returns true and returns a sequence starting with the first item to invalidate the predicate (if any).  SkipWhile(Func<T, int, bool> predicate) Same as above, but passes not only the item itself to the predicate, but also the index of the item.  For example: 1: var list = new[] { 3.14, 2.72, 42.0, 9.9, 13.0, 101.0 }; 2:  3: // sequence contains { 2.72, 42.0, 9.9, 13.0, 101.0 } 4: var afterSecond = list.Skip(1); 5: Console.WriteLine(string.Join(", ", afterSecond)); 6:  7: // sequence contains { 42.0, 9.9, 13.0, 101.0 } 8: var afterFirstDoubleDigit = list.SkipWhile(v => v < 10.0); 9: Console.WriteLine(string.Join(", ", afterFirstDoubleDigit)); Note that the SkipWhile() stops skipping at the first item that returns false and returns from there to the rest of the sequence, even if further items in that sequence also would satisfy the predicate (otherwise, you’d probably be using Where() instead, of course). If you do use the form of SkipWhile() which also passes an index into the predicate, then you should keep in mind that this is the index of the item in the sequence you are calling SkipWhile() from, not the index in the original collection.  That is, consider the following: 1: var list = new[] { 1.0, 1.1, 1.2, 2.2, 2.3, 2.4 }; 2:  3: // Get all items < 10, then 4: var whatAmI = list 5: .Skip(2) 6: .SkipWhile((i, x) => i > x); For this example the result above is 2.4, and not 1.2, 2.2, 2.3, 2.4 as some might expect.  The key is knowing what the index is that’s passed to the predicate in SkipWhile().  In the code above, because Skip(2) skips 1.0 and 1.1, the sequence passed to SkipWhile() begins at 1.2 and thus it considers the “index” of 1.2 to be 0 and not 2.  This same logic applies when using any of the extension methods that have an overload that allows you to pass an index into the delegate, such as SkipWhile(), TakeWhile(), Select(), Where(), etc.  It should also be noted, that it’s fine to Skip() more items than exist in the sequence (an empty sequence is the result), or even to Skip(0) which results in the full sequence.  So why would it ever be useful to return Skip(0) deliberately?  One reason might be to return a List<T> as an immutable sequence.  
Consider this class: 1: public class MyClass 2: { 3: private List<int> _myList = new List<int>(); 4:  5: // works on surface, but one can cast back to List<int> and mutate the original... 6: public IEnumerable<int> OneWay 7: { 8: get { return _myList; } 9: } 10:  11: // works, but still has Add() etc which throw at runtime if accidentally called 12: public ReadOnlyCollection<int> AnotherWay 13: { 14: get { return new ReadOnlyCollection<int>(_myList); } 15: } 16:  17: // immutable, can't be cast back to List<int>, doesn't have methods that throw at runtime 18: public IEnumerable<int> YetAnotherWay 19: { 20: get { return _myList.Skip(0); } 21: } 22: } This code snippet shows three (among many) ways to return an internal sequence in varying levels of immutability.  Obviously if you just try to return as IEnumerable<T> without doing anything more, there’s always the danger the caller could cast back to List<T> and mutate your internal structure.  You could also return a ReadOnlyCollection<T>, but this still has the mutating methods, they just throw at runtime when called instead of giving compiler errors.  Finally, you can return the internal list as a sequence using Skip(0) which skips no items and just runs an iterator through the list.  The result is an iterator, which cannot be cast back to List<T>.  Of course, there’s many ways to do this (including just cloning the list, etc.) but the point is it illustrates a potential use of using an explicit Skip(0). Take() and TakeWhile() The Take() and TakeWhile() methods can be though of as somewhat of the inverse of Skip() and SkipWhile().  That is, while Skip() ignores the first X items and returns the rest, Take() returns a sequence of the first X items and ignores the rest.  Since they are somewhat of an inverse of each other, it makes sense that their calling signatures are identical (beyond the method name obviously): Take(int count) Returns a sequence containing up to the specified number of items. Anything after the count is ignored. TakeWhile(Func<T, bool> predicate) Returns a sequence containing items as long as the predicate returns true.  Anything from the point the predicate returns false and beyond is ignored. TakeWhile(Func<T, int, bool> predicate) Same as above, but passes not only the item itself to the predicate, but also the index of the item. So, for example, we could do the following: 1: var list = new[] { 1.0, 1.1, 1.2, 2.2, 2.3, 2.4 }; 2:  3: // sequence contains 1.0 and 1.1 4: var firstTwo = list.Take(2); 5:  6: // sequence contains 1.0, 1.1, 1.2 7: var underTwo = list.TakeWhile(i => i < 2.0); The same considerations for SkipWhile() with index apply to TakeWhile() with index, of course.  Using Skip() and Take() for sub-sequences A few weeks back, I talked about The List<T> Range Methods and showed how they could be used to get a sub-list of a List<T>.  This works well if you’re dealing with List<T>, or don’t mind converting to List<T>.  But if you have a simple IEnumerable<T> sequence and want to get a sub-sequence, you can also use Skip() and Take() to much the same effect: 1: var list = new List<double> { 1.0, 1.1, 1.2, 2.2, 2.3, 2.4 }; 2:  3: // results in List<T> containing { 1.2, 2.2, 2.3 } 4: var subList = list.GetRange(2, 3); 5:  6: // results in sequence containing { 1.2, 2.2, 2.3 } 7: var subSequence = list.Skip(2).Take(3); I say “much the same effect” because there are some differences.  
First of all GetRange() will throw if the starting index or the count are greater than the number of items in the list, but Skip() and Take() do not.  Also GetRange() is a method off of List<T>, thus it can use direct indexing to get to the items much more efficiently, whereas Skip() and Take() operate on sequences and may actually have to walk through the items they skip to create the resulting sequence.  So each has their pros and cons.  My general rule of thumb is if I’m already working with a List<T> I’ll use GetRange(), but for any plain IEnumerable<T> sequence I’ll tend to prefer Skip() and Take() instead. Summary The Skip() and Take() families of LINQ extension methods are handy for producing sub-sequences from any IEnumerable<T> sequence.  Skip() will ignore the specified number of items and return the rest of the sequence, whereas Take() will return the specified number of items and ignore the rest of the sequence.  Similarly, the SkipWhile() and TakeWhile() methods can be used to skip or take items, respectively, until a given predicate returns false.    Technorati Tags: C#, CSharp, .NET, LINQ, IEnumerable<T>, Skip, Take, SkipWhile, TakeWhile
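A short footnote to the summary: the most common day-to-day use I have for this pair is paging through a sequence. A minimal sketch (the Page extension method and its pageNumber/pageSize parameters are hypothetical, not part of the post above):

using System;
using System.Collections.Generic;
using System.Linq;

public static class PagingExtensions
{
    // Returns the requested 1-based page by skipping the earlier pages
    // and taking at most one page worth of items.
    public static IEnumerable<T> Page<T>(this IEnumerable<T> source, int pageNumber, int pageSize)
    {
        return source.Skip((pageNumber - 1) * pageSize).Take(pageSize);
    }
}

public static class Program
{
    public static void Main()
    {
        var list = new[] { 1.0, 1.1, 1.2, 2.2, 2.3, 2.4 };

        // Page 2 of size 2 -> 1.2, 2.2
        Console.WriteLine(string.Join(", ", list.Page(2, 2)));
    }
}

Because Skip() and Take() never throw for out-of-range values, asking for a page past the end simply yields an empty sequence.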

    Read the article

  • My D-Link's Ethernet bridge downlink just got 10-30x slower?

    - by Jay Levitt
    TL;DR: I unplugged my network to move my desk, and now downloading via my DIR-655's Ethernet LAN bridge is 10-30x slower than the Ethernet switch it's plugged into. Background My network is SMC cable modem <-> Cisco firewall <-> Netgear switch <-> D-Link WiFi† | | | | SMC8014 ASA-5505 GS608v2 gigE DIR-655 rev A3 gigE †The DIR-655 is used as an access point, not a router (although what D-Link calls an access point, I'd call a bridge). The "WAN" port is unused; the Netgear connects to the built-in 4-port Ethernet LAN switch, inside the built- in router/firewall. Endpoints: MacBook Pro 17" mid-2010 iPhone 4S Fedora 12 Linux server running reasonably fast dual-Athlon X2, VelociRaptors, etc. All cables are <10 feet, mostly CAT-5e, some CAT-6, all premade. All WiFi endpoints are within three feet of the D-Link. Yesterday I unplugged and rearranged stuff, and now connecting via the D-Link - even through the wired switch, right next to the incoming network cable - is 30x slower than connecting directly to the Netgear switch, on both my MacBook and iPhone. How I'm measuring "slower" I'm mostly using http://speedtest.net, which of course only really measures broadband speeds. I've also installed http://www.speedtest.net/mini.php on my local server, but can't test the iPhone with that. Results Speedtest.net, closest server over Comcast business-class: CONFIG | PING (ms) | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> Netgear | 9 | 31.6 | 6.8 Mac <-> Ethernet <-> D-Link | 8 | 4.1 | 6.0 Mac <-> WiFi <-> D-Link | 9 | 1.4 | 2.9 iPhone <-> WiFi <-> D-Link | 67 | 0.4 | 1.6 Speedtest Mini on Linux PC: CONFIG | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> NetGear | 97.2 | 76.9 Mac <-> Ethernet <-> D-Link | 8.2 | 24.2 Mac <-> WiFi <-> D-Link | 1.0 | 8.6 Slow typing in SSH: Mac <-> Ethernet <-> Netgear <-> Linux PC: smooth Mac <-> Ethernet <-> D-Link <-> Linux PC: choppy Note that D-Link upload speeds are normal on broadband, slower locally (but I'd believe that's a D-Link limitation), and always faster than the downloads! Since ssh is choppy just with slow typing, I don't believe it's a throttling-type problem either; that's not a lot of bandwidth. What I've tried Swapping all "good" and "bad" cables Re-plugging "bad" cable from D-Link to Netgear and watching it be the "good" cable pulling cables away from power lines Verify that the Mac auto-detects the D-Link as gigE Try to verify the link speed of the D-Link <- Netgear connection, but the firmware doesn't report that Verify that the D-Link sees no TX/RX errors or collisions Use different Ethernet ports on both Netgear and D-Link Reset the D-Link to factory settings Upgrade the D-Link firmware from 1.21 to 1.35NA, 2010/11/12, the latest Reboot everything at least once On the Mac, disable Wi-Fi during the Ethernet tests, and unplug Ethernet during the Wi-Fi tests Using iStumbler, verify that the D-Link isn't picking overloaded Wi-Fi channels (usually just 1-5 neighbors on my and adjacent channels, average for my apt building) Verify that the only client connected to the Wi-Fi was the iPhone Verify that nothing was being chatty on my network according to the WISH log Enable and disable all sorts of D-Link settings, including forcing WAN auto-detect to gigE So. I don't mind buying a new access point—I wouldn't mind having a dual-link network—but as a guy who's been networking since gated v4 was a drastic rewrite, and who often used physical sniffers in the days before Wireshark, I'm baffled. I hate being baffled. 
What could I possibly have changed that would result in this? How can I measure it? All I can think of is a static zap—thick carpet, socks, HVAC—but I didn't feel one, and does that really happen anymore? Can I test if it's Ethernet vs. TCP layer slowness? I'm not familiar with modern network utilities; it's hard to Google without hitting "Q: Why is my network slow? A: Is your microwave on?" If I don't get an answer here, will someone big and powerful help me migrate it to serverfault without getting screamed back here? In the words of Inigo Montoya, "I must know." Don't get all Dread Pirate Roberts on me.
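    One way to take the browser and the broadband link out of the picture is to run a plain TCP transfer between the Mac and the Fedora box through each path (straight into the Netgear, then through the D-Link's LAN ports) and compare the rates. The sketch below is just one way to do that with nothing but the Python standard library; the port number and the 192.168.x.x address are placeholders, so substitute whatever your LAN actually uses.

# throughput_probe.py - minimal TCP throughput probe (plain standard library)
# Usage (addresses and port are placeholders):
#   on the Linux PC:  python throughput_probe.py server 5201
#   on the Mac:       python throughput_probe.py client 192.168.1.10 5201
import socket, sys, time

CHUNK = 64 * 1024          # 64 KB per send/recv
TOTAL = 64 * 1024 * 1024   # push 64 MB per run

def server(port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    conn, addr = srv.accept()
    received, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.time() - start
    print("received %.1f MB in %.2f s = %.1f Mbit/s"
          % (received / 1e6, elapsed, received * 8 / elapsed / 1e6))
    conn.close()
    srv.close()

def client(host, port):
    sock = socket.create_connection((host, port))
    payload = b"x" * CHUNK
    sent, start = 0, time.time()
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()
    elapsed = time.time() - start
    print("sent %.1f MB in %.2f s = %.1f Mbit/s"
          % (sent / 1e6, elapsed, sent * 8 / elapsed / 1e6))

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server(int(sys.argv[2]))
    else:
        client(sys.argv[2], int(sys.argv[3]))

    If the raw transfer collapses only on the D-Link path, the problem is in the Ethernet bridge itself (negotiation, a damaged port, a bad cable seat) rather than anything TCP- or broadband-related; if both paths look similar here, the slowdown is happening above the wire.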

    Read the article

  • OData &ndash; The easiest service I can create: now with updates

    - by Jon Dalberg
    The other day I created a simple NastyWord service exposed via OData. It was read-only and used an in-memory backing store for the words. Today I’ll modify it to use a file instead of a list and I’ll accept new nasty words by implementing IUpdatable directly. The first thing to do is enable the service to accept new entries. This is done at configuration time by adding the “WriteAppend” access rule: 1: public class NastyWords : DataService<NastyWordsDataSource> 2: { 3: // This method is called only once to initialize service-wide policies. 4: public static void InitializeService(DataServiceConfiguration config) 5: { 6: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 7: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 8: } 9: }   Next I placed a file, NastyWords.txt, in the “App_Data” folder and added a few *choice* words to start. This required one simple change to our NastyWordDataSource.cs file: 1: public NastyWordsDataSource() 2: { 3: UpdateFromSource(); 4: } 5:   6: private void UpdateFromSource() 7: { 8: var words = File.ReadAllLines(pathToFile); 9: NastyWords = (from w in words 10: select new NastyWord { Word = w }).AsQueryable(); 11: }   Nothing too shocking here, just reading each line from the NastyWords.txt file and exposing them. Next, I implemented IUpdatable which comes with a boat-load of methods. We don’t need all of them for now since we are only concerned with allowing new values. Here are the methods we must implement, all the others throw a NotImplementedException: 1: public object CreateResource(string containerName, string fullTypeName) 2: { 3: var nastyWord = new NastyWord(); 4: pendingUpdates.Add(nastyWord); 5: return nastyWord; 6: } 7:   8: public object ResolveResource(object resource) 9: { 10: return resource; 11: } 12:   13: public void SaveChanges() 14: { 15: var intersect = (from w in pendingUpdates 16: select w.Word).Intersect(from n in NastyWords 17: select n.Word); 18:   19: if (intersect.Count() > 0) 20: throw new DataServiceException(500, "duplicate entry"); 21:   22: var lines = from w in pendingUpdates 23: select w.Word; 24:   25: File.AppendAllLines(pathToFile, 26: lines, 27: Encoding.UTF8); 28:   29: pendingUpdates.Clear(); 30:   31: UpdateFromSource(); 32: } 33:   34: public void SetValue(object targetResource, string propertyName, object propertyValue) 35: { 36: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 37: }   I use a simple list to contain the pending updates and only commit them when the “SaveChanges” method is called. Here’s the order these methods are called in our service during an insert: CreateResource – here we just instantiate a new NastyWord and stick a reference to it in our pending updates list. SetValue – this is where the “Word” property of the NastyWord instance is set. SaveChanges – get the list of pending updates, barfing on duplicates, write them to the file and clear our pending list. ResolveResource – the newly created resource will be returned directly here since we aren’t dealing with “handles” to objects but the actual objects themselves. Not too bad, eh? I didn’t find this documented anywhere but a little bit of digging in the OData spec and use of Fiddler made it pretty easy to figure out. 
Here is some client code which would add a new nasty word: 1: static void Main(string[] args) 2: { 3: var svc = new ServiceReference1.NastyWordsDataSource(new Uri("http://localhost.:60921/NastyWords.svc")); 4: svc.AddToNastyWords(new ServiceReference1.NastyWord() { Word = "shat" }); 5:   6: svc.SaveChanges(); 7: }   Here’s all of the code so far for to implement the service: 1: using System; 2: using System.Collections.Generic; 3: using System.Data.Services; 4: using System.Data.Services.Common; 5: using System.Linq; 6: using System.ServiceModel.Web; 7: using System.Web; 8: using System.IO; 9: using System.Text; 10:   11: namespace ONasty 12: { 13: [DataServiceKey("Word")] 14: public class NastyWord 15: { 16: public string Word { get; set; } 17: } 18:   19: public class NastyWordsDataSource : IUpdatable 20: { 21: private List<NastyWord> pendingUpdates = new List<NastyWord>(); 22: private string pathToFile = @"path to your\App_Data\NastyWords.txt"; 23:   24: public NastyWordsDataSource() 25: { 26: UpdateFromSource(); 27: } 28:   29: private void UpdateFromSource() 30: { 31: var words = File.ReadAllLines(pathToFile); 32: NastyWords = (from w in words 33: select new NastyWord { Word = w }).AsQueryable(); 34: } 35:   36: public IQueryable<NastyWord> NastyWords { get; private set; } 37:   38: public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded) 39: { 40: throw new NotImplementedException(); 41: } 42:   43: public void ClearChanges() 44: { 45: pendingUpdates.Clear(); 46: } 47:   48: public object CreateResource(string containerName, string fullTypeName) 49: { 50: var nastyWord = new NastyWord(); 51: pendingUpdates.Add(nastyWord); 52: return nastyWord; 53: } 54:   55: public void DeleteResource(object targetResource) 56: { 57: throw new NotImplementedException(); 58: } 59:   60: public object GetResource(IQueryable query, string fullTypeName) 61: { 62: throw new NotImplementedException(); 63: } 64:   65: public object GetValue(object targetResource, string propertyName) 66: { 67: throw new NotImplementedException(); 68: } 69:   70: public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved) 71: { 72: throw new NotImplementedException(); 73: } 74:   75: public object ResetResource(object resource) 76: { 77: throw new NotImplementedException(); 78: } 79:   80: public object ResolveResource(object resource) 81: { 82: return resource; 83: } 84:   85: public void SaveChanges() 86: { 87: var intersect = (from w in pendingUpdates 88: select w.Word).Intersect(from n in NastyWords 89: select n.Word); 90:   91: if (intersect.Count() > 0) 92: throw new DataServiceException(500, "duplicate entry"); 93:   94: var lines = from w in pendingUpdates 95: select w.Word; 96:   97: File.AppendAllLines(pathToFile, 98: lines, 99: Encoding.UTF8); 100:   101: pendingUpdates.Clear(); 102:   103: UpdateFromSource(); 104: } 105:   106: public void SetReference(object targetResource, string propertyName, object propertyValue) 107: { 108: throw new NotImplementedException(); 109: } 110:   111: public void SetValue(object targetResource, string propertyName, object propertyValue) 112: { 113: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 114: } 115: } 116:   117: public class NastyWords : DataService<NastyWordsDataSource> 118: { 119: // This method is called only once to initialize service-wide policies. 
120: public static void InitializeService(DataServiceConfiguration config) 121: { 122: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 123: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 124: } 125: } 126: } Next time we’ll allow removing nasty words. Enjoy!
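    One piece not shown in the listing is the .svc endpoint itself. Assuming this is hosted as a standard WCF Data Services (ADO.NET Data Services) service, which is what deriving from DataService<T> implies, NastyWords.svc is just a host declaration pointing at the NastyWords class in the ONasty namespace above; depending on the framework version, the Factory value may need to be the fully assembly-qualified name.

<%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory" Service="ONasty.NastyWords" %>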

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl, (yes, that's control plus the comma key). This pops up the Navigate To dialog that looks like this: As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy or at least substring matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provide the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40), and we wanted to provide this functionality too. This post provides some notes on the things I discovered mainly through trial and error, but also with some kind help from devs inside Microsoft. The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of our Visual COBOL projects inside the solution. So first of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx. It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider. Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, the Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET Framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page http://archive.msdn.microsoft.com/vsx. I've also written a small application to help with switching between development and production MEF assemblies, which you can find on CodeProject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx.
    The Navigate To interfaces
    Back to Navigate To, and summarizing the MSDN reference documentation, you need to implement the following interfaces:
    - INavigateToItemProviderFactory: This is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]
    - INavigateToItemProvider: Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposable.
    - INavigateToItemDisplayFactory: Your INavigateToItemProvider hands back NavigateToItems to the Navigate To framework. But to give you good control over what appears in the Navigate To dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay.
    - INavigateToItemDisplay: Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked.
    Carrying out the search
    The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE – so you need to be aware of that if you want to interact with editors or other parts of the IDE UI. When the user invokes the Navigate To dialog, your INavigateToItemProviderFactory gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand this back. The sequence diagram below shows what happens next. Your INavigateToItemProvider will get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request – you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the Search Terms field, Navigate To will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search, and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message – that's your cue to abandon any uncompleted searches, and dispose any resources you might be using as part of your search. While you conduct your search, invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement on how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols.
    Displaying the results
    The Navigate To framework invokes INavigateToItemDisplayProvider.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement a NavigateTo() method to display a symbol in an editor when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in here, and how it enables the retrieval of other information which is not included in the NavigateToItem properties.
    Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious:
    - Additional information: The string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". If you look at the first line in the Navigate To screenshot at the top of this article, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace-qualified type name the symbol appears in. For procedural COBOL code we display the Program Id as the additional information.
    - DescriptionItems: You can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description collection of DescriptionRun objects. A DescriptionRun enables you to specify some text, and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the Categories on the left and the Descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (something there can be a lot of in large COBOL applications).
    Summary
    I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft have added to Visual Studio, but at the moment there still don't seem to be any examples on how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
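    Since the MSDN reference is thin, a bare skeleton may help show how the factory and provider fit together. Treat it strictly as a sketch: the member signatures are paraphrased from the interfaces described above and should be checked against the Microsoft.VisualStudio.Language.NavigateTo.Interfaces documentation, and FindMatches is a placeholder for whatever symbol store your package already keeps.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;   // MEF [Export]
using System.Threading;
using Microsoft.VisualStudio.Language.NavigateTo.Interfaces;

// Entry point: Visual Studio finds this through MEF, no registration needed.
[Export(typeof(INavigateToItemProviderFactory))]
public sealed class MyNavigateToProviderFactory : INavigateToItemProviderFactory
{
    public bool TryCreateNavigateToItemProvider(IServiceProvider serviceProvider,
                                                out INavigateToItemProvider provider)
    {
        provider = new MyNavigateToProvider();
        return true;
    }
}

public sealed class MyNavigateToProvider : INavigateToItemProvider
{
    private volatile bool stopRequested;

    // Invoked repeatedly as the user types; return immediately and search on
    // another thread, because the dialog does not run on the IDE's UI thread.
    public void StartSearch(INavigateToCallback callback, string searchValue)
    {
        stopRequested = false;
        ThreadPool.QueueUserWorkItem(_ =>
        {
            IList<string> symbols = FindMatches(searchValue);   // your own symbol store
            for (int i = 0; i < symbols.Count && !stopRequested; i++)
            {
                // Build and hand back a NavigateToItem for symbols[i] here; its
                // constructor wants the name, a kind, your language name, a Tag
                // object and your INavigateToItemDisplayFactory - see MSDN.
                // callback.AddItem(item);
                callback.ReportProgress(i + 1, symbols.Count);  // ratio of two integers
            }
        });
    }

    public void StopSearch() { stopRequested = true; }

    public void Dispose() { stopRequested = true; }

    // Placeholder standing in for whatever symbol dictionary your package keeps.
    private static IList<string> FindMatches(string term)
    {
        return new List<string>();
    }
}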

    Read the article

  • Key Promoter for NetBeans

    - by Geertjan
    Whenever a menu item or toolbar button is clicked, it would be handy if NetBeans were to tell you 'hey, did you know, you can actually do this via the following keyboard shortcut', if a keyboard shortcut exists for the invoked action. After all, ultimately, a lot of developers would like to do everything with the keyboard and a key promoter feature of this kind is a helpful tool in learning the keyboard shortcuts related to the menu items and toolbar buttons you're clicking with your mouse. Above, you see the balloon message that appears for each menu item and toolbar button that you click and, below, you can see a list of all the actions that have been logged in the Notifications window. That happens automatically when an action is invoked (assuming the plugin described in this blog entry is installed), showing the display name of the action, together with the keyboard shortcut, which is presented as a hyperlink which, when clicked, re-invokes the action (which might not always be relevant, especially for context-sensitive actions, though for others it is quite useful, e.g., reopen the New Project wizard). And here's all the code. Notice that I'm hooking into the 'uigestures' functionality, which was suggested by Tim Boudreau, and I have added my own handler, which was suggested by Jaroslav Tulach, which gets a specific parameter from each new log entry handled by the 'org.netbeans.ui.actions' logger, makes sure that the parameter actually is an action, and then gets the relevant info from the action, if the relevant info exists:
    @OnShowing
    public class Startable implements Runnable {
        @Override
        public void run() {
            Logger logger = Logger.getLogger("org.netbeans.ui.actions");
            logger.addHandler(new StreamHandler() {
                @Override
                public void publish(LogRecord record) {
                    Object[] parameters = record.getParameters();
                    if (parameters[2] instanceof Action) {
                        Action a = (Action) parameters[2];
                        JMenuItem menu = new JMenuItem();
                        Mnemonics.setLocalizedText(
                                menu,
                                a.getValue(Action.NAME).toString());
                        String name = menu.getText();
                        if (a.getValue(Action.ACCELERATOR_KEY) != null) {
                            String accelerator = a.getValue(Action.ACCELERATOR_KEY).toString();
                            NotificationDisplayer.getDefault().notify(
                                    name,
                                    new ImageIcon("/org/nb/kp/car.png"),
                                    accelerator,
                                    new ActionListener() {
                                @Override
                                public void actionPerformed(ActionEvent e) {
                                    a.actionPerformed(e);
                                }
                            });
                        }
                    }
                }
            });
        }
    }
    Indeed, inspired by the Key Promoter in IntelliJ IDEA. Interested in trying it out? If there's interest in it, I'll put it in the NetBeans Plugin Portal.

    Read the article

  • How can I use the Boost Graph Library to lay out verticies?

    - by Mike
    I'm trying to lay out vertices using the Boost Graph Library. However, I'm running into some compilation issues which I'm unsure about. Am I using the BGL in an improper manner? My code is: PositionVec position_vec(2); PositionMap position(position_vec.begin(), get(vertex_index, g)); int iterations = 100; double width = 100.0; double height = 100.0; minstd_rand gen; rectangle_topology<> topology(gen, 0, 0, 100, 100); fruchterman_reingold_force_directed_layout(g, position, topology); //Compile fails on this line The diagnostics produced by clang++(I've also tried GCC) are: In file included from test.cpp:2: /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:95:3: error: no member named 'dimensions' in 'boost::simple_point<double>' BOOST_STATIC_ASSERT (Point::dimensions == 2); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from test.cpp:2: In file included from /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:13: In file included from /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/graph_traits.hpp:15: In file included from /Volumes/Data/mike/Downloads/boost_1_43_0/boost/tuple/tuple.hpp:24: /Volumes/Data/mike/Downloads/boost_1_43_0/boost/static_assert.hpp:118:49: note: instantiated from: sizeof(::boost::STATIC_ASSERTION_FAILURE< BOOST_STATIC_ASSERT_BOOL_CAST( B ) >)>\ ^ In file included from test.cpp:2: /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:95:3: note: instantiated from: BOOST_STATIC_ASSERT (Point::dimensions == 2); ^ ~~~~~~~ /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:95:31: note: instantiated from: BOOST_STATIC_ASSERT (Point::dimensions == 2); ~~~~~~~^ /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:417:19: note: in instantiation of template class 'boost::grid_force_pairs<boost::rectangle_topology<boost::random::linear_congruential<int, 48271, 0, 2147483647, 399268537> >, boost::iterator_property_map<__gnu_cxx::__normal_iterator<boost::simple_point<double> *, std::vector<boost::simple_point<double>, std::allocator<boost::simple_point<double> > > >, boost::vec_adj_list_vertex_id_map<boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, unsigned long>, boost::simple_point<double>, boost::simple_point<double> &> >' requested here make_grid_force_pairs(topology, position, g)), ^ /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:431:3: note: in instantiation of function template specialization 'boost::fruchterman_reingold_force_directed_layout<boost::rectangle_topology<boost::random::linear_congruential<int, 48271, 0, 2147483647, 399268537> >, boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS, boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, boost::no_property, boost::no_property, boost::listS>, boost::iterator_property_map<__gnu_cxx::__normal_iterator<boost::simple_point<double> *, std::vector<boost::simple_point<double>, std::allocator<boost::simple_point<double> > > >, boost::vec_adj_list_vertex_id_map<boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, unsigned long>, boost::simple_point<double>, boost::simple_point<double> &>, boost::square_distance_attractive_force, boost::attractive_force_t, boost::no_property>' requested here fruchterman_reingold_force_directed_layout ^ test.cpp:48:3: note: in instantiation of function template specialization 
'boost::fruchterman_reingold_force_directed_layout<boost::rectangle_topology<boost::random::linear_congruential<int, 48271, 0, 2147483647, 399268537> >, boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS, boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, boost::no_property, boost::no_property, boost::listS>, boost::iterator_property_map<__gnu_cxx::__normal_iterator<boost::simple_point<double> *, std::vector<boost::simple_point<double>, std::allocator<boost::simple_point<double> > > >, boost::vec_adj_list_vertex_id_map<boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, unsigned long>, boost::simple_point<double>, boost::simple_point<double> &> >' requested here fruchterman_reingold_force_directed_layout(g, position, topology); ^ 1 error generated.
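    Without the full source it is hard to be certain, but the static assert complains that boost::simple_point<double> has no dimensions member, which suggests the position map's point type does not match the topology. One direction to try, sketched here under that assumption (and assuming a Boost version where the topology-based overloads exist, as 1.43 appears to be), is to build the position vector from rectangle_topology<>::point_type and seed it with random_graph_layout before the Fruchterman-Reingold pass:

// Sketch only: minimal graph typedef; the change being tried is using the
// topology's own point_type instead of boost::simple_point<double>.
#include <vector>
#include <string>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/topology.hpp>
#include <boost/graph/random_layout.hpp>
#include <boost/graph/fruchterman_reingold.hpp>
#include <boost/random/linear_congruential.hpp>
#include <boost/property_map/property_map.hpp>

using namespace boost;

typedef adjacency_list<listS, vecS, undirectedS,
                       property<vertex_name_t, std::string> > Graph;
typedef rectangle_topology<> Topology;
typedef Topology::point_type Point;              // carries the expected dimensions
typedef std::vector<Point> PositionVec;
typedef iterator_property_map<PositionVec::iterator,
        property_map<Graph, vertex_index_t>::type> PositionMap;

int main()
{
    Graph g(3);
    add_edge(0, 1, g);
    add_edge(1, 2, g);

    minstd_rand gen;
    Topology topology(gen, 0, 0, 100, 100);

    PositionVec position_vec(num_vertices(g));   // one slot per vertex
    PositionMap position(position_vec.begin(), get(vertex_index, g));

    random_graph_layout(g, position, topology);  // initial placement
    fruchterman_reingold_force_directed_layout(g, position, topology);
    return 0;
}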

    Read the article

  • VFP Unit Matrix Multiply problem on the iPhone

    - by Ian Copland
    Hi. I'm trying to write a Matrix3x3 multiply using the Vector Floating Point on the iPhone, however i'm encountering some problems. This is my first attempt at writing any ARM assembly, so it could be a faily simple solution that i'm not seeing. I've currently got a small application running using a maths library that i've written. I'm investigating into the benifits using the Vector Floating Point Unit would provide so i've taken my matrix multiply and converted it to asm. Previously the application would run without a problem, however now my objects will all randomly disappear. This seems to be caused by the results from my matrix multiply becoming NAN at some point. Heres the code IMatrix3x3 operator*(IMatrix3x3 & _A, IMatrix3x3 & _B) { IMatrix3x3 C; //C++ code for the simulator #if TARGET_IPHONE_SIMULATOR == true C.A0 = _A.A0 * _B.A0 + _A.A1 * _B.B0 + _A.A2 * _B.C0; C.A1 = _A.A0 * _B.A1 + _A.A1 * _B.B1 + _A.A2 * _B.C1; C.A2 = _A.A0 * _B.A2 + _A.A1 * _B.B2 + _A.A2 * _B.C2; C.B0 = _A.B0 * _B.A0 + _A.B1 * _B.B0 + _A.B2 * _B.C0; C.B1 = _A.B0 * _B.A1 + _A.B1 * _B.B1 + _A.B2 * _B.C1; C.B2 = _A.B0 * _B.A2 + _A.B1 * _B.B2 + _A.B2 * _B.C2; C.C0 = _A.C0 * _B.A0 + _A.C1 * _B.B0 + _A.C2 * _B.C0; C.C1 = _A.C0 * _B.A1 + _A.C1 * _B.B1 + _A.C2 * _B.C1; C.C2 = _A.C0 * _B.A2 + _A.C1 * _B.B2 + _A.C2 * _B.C2; //VPU ARM asm for the device #else //create a pointer to the Matrices IMatrix3x3 * pA = &_A; IMatrix3x3 * pB = &_B; IMatrix3x3 * pC = &C; //asm code asm volatile( //turn on a vector depth of 3 "fmrx r0, fpscr \n\t" "bic r0, r0, #0x00370000 \n\t" "orr r0, r0, #0x00020000 \n\t" "fmxr fpscr, r0 \n\t" //load matrix B into the vector bank "fldmias %1, {s8-s16} \n\t" //load the first row of A into the scalar bank "fldmias %0!, {s0-s2} \n\t" //calulate C.A0, C.A1 and C.A2 "fmuls s17, s8, s0 \n\t" "fmacs s17, s11, s1 \n\t" "fmacs s17, s14, s2 \n\t" //save this into the output "fstmias %2!, {s17-s19} \n\t" //load the second row of A into the scalar bank "fldmias %0!, {s0-s2} \n\t" //calulate C.B0, C.B1 and C.B2 "fmuls s17, s8, s0 \n\t" "fmacs s17, s11, s1 \n\t" "fmacs s17, s14, s2 \n\t" //save this into the output "fstmias %2!, {s17-s19} \n\t" //load the third row of A into the scalar bank "fldmias %0!, {s0-s2} \n\t" //calulate C.C0, C.C1 and C.C2 "fmuls s17, s8, s0 \n\t" "fmacs s17, s11, s1 \n\t" "fmacs s17, s14, s2 \n\t" //save this into the output "fstmias %2!, {s17-s19} \n\t" //set the vector depth back to 1 "fmrx r0, fpscr \n\t" "bic r0, r0, #0x00370000 \n\t" "orr r0, r0, #0x00000000 \n\t" "fmxr fpscr, r0 \n\t" //pass the inputs and set the clobber list : "+r"(pA), "+r"(pB), "+r" (pC) : :"cc", "memory","s0", "s1", "s2", "s8", "s9", "s10", "s11", "s12", "s13", "s14", "s15", "s16", "s17", "s18", "s19" ); #endif return C; } As far as i can see that makes sence. While debugging i've managed to notice that if i were to say _A = C prior to the return and after the ASM, _A will not necessarily be equal to C which has only increased my confusion. I had thought it was possibly due to the pointers I'm giving to the VFPU being incrimented by lines such as "fldmias %0!, {s0-s2} \n\t" however my understanding of asm is not good enough to properly understand the problem, nor to see an alternative approach to that line of code. Anyway, I was hoping someone with a greater understanding than me would be able to see a solution, and any help would be greatly appreciated, thank you :-)

    Read the article

  • Flex 4, chart, Error 1009

    - by Stephane
    Hello, I am new to flex development and I am stuck with this error TypeError: Error #1009: Cannot access a property or method of a null object reference. at mx.charts.series::LineSeries/findDataPoints()[E:\dev\4.0.0\frameworks\projects\datavisualization\src\mx\charts\series\LineSeries.as:1460] at mx.charts.chartClasses::ChartBase/findDataPoints()[E:\dev\4.0.0\frameworks\projects\datavisualization\src\mx\charts\chartClasses\ChartBase.as:2202] at mx.charts.chartClasses::ChartBase/mouseMoveHandler()[E:\dev\4.0.0\frameworks\projects\datavisualization\src\mx\charts\chartClasses\ChartBase.as:4882] I try to create a chart where I can move the points with the mouse I notice that the error doesn't occur if I move the point very slowly I have try to use the debugger and pint some debug every where without success All the behaviours were ok until I had the modifyData Please let me know if you have some experience with this kind of error I will appreciate any help. Is it also possible to remove the error throwing because after that the error occur if I click the dismiss all error button then the component work great there is the simple code of the chart <fx:Declarations> </fx:Declarations> <fx:Script> <![CDATA[ import mx.charts.HitData; import mx.charts.events.ChartEvent; import mx.charts.events.ChartItemEvent; import mx.collections.ArrayCollection; [Bindable] private var selectItem:Object; [Bindable] private var chartMouseY:int; [Bindable] private var hitData:Boolean=false; [Bindable] private var profitPeriods:ArrayCollection = new ArrayCollection( [ { period: "Qtr1 2006", profit: 32 }, { period: "Qtr2 2006", profit: 47 }, { period: "Qtr3 2006", profit: 62 }, { period: "Qtr4 2006", profit: 35 }, { period: "Qtr1 2007", profit: 25 }, { period: "Qtr2 2007", profit: 55 } ]); public function chartMouseUp(e:MouseEvent):void{ if(hitData){ hitData = false; } } private function chartMouseMove(e:MouseEvent):void { if(hitData){ var p:Point = new Point(linechart.mouseX,linechart.mouseY); var d:Array = linechart.localToData(p); chartMouseY=d[1]; modifyData(); } } public function modifyData():void { var idx:int = profitPeriods.getItemIndex(selectItem); var item:Object = profitPeriods.getItemAt(idx); item.profit = chartMouseY; profitPeriods.setItemAt(item,idx); } public function chartMouseDown(e:MouseEvent):void{ if(!hitData){ var hda:Array = linechart.findDataPoints(e.currentTarget.mouseX, e.currentTarget.mouseY); if (hda[0]) { selectItem = hda[0].item; hitData = true; } } } ]]> </fx:Script> <s:layout> <s:HorizontalLayout horizontalAlign="center" verticalAlign="middle" /> </s:layout> <s:Panel title="LineChart Control" > <s:VGroup > <s:HGroup> <mx:LineChart id="linechart" color="0x323232" height="500" width="377" mouseDown="chartMouseDown(event)" mouseMove="chartMouseMove(event)" mouseUp="chartMouseUp(event)" showDataTips="true" dataProvider="{profitPeriods}" > <mx:horizontalAxis> <mx:CategoryAxis categoryField="period"/> </mx:horizontalAxis> <mx:series> <mx:LineSeries yField="profit" form="segment" displayName="Profit"/> </mx:series> </mx:LineChart> <mx:Legend dataProvider="{linechart}" color="0x323232"/> </s:HGroup> <mx:Form> <mx:TextArea id="DEBUG" height="200" width="300"/> </mx:Form> </s:VGroup> </s:Panel> UPDATE 30/2010 : the null object is _renderData.filteredCache from the chartline the code call before error is the default mouseMoveHandler of chartBase to chartline. Is it possible to remove it ? or provide a filteredCache

    Read the article

  • Token replacement

    - by ClarkeyBoy
    Hey, I currently have a system on my website whereby I can put something like "[cfe]" anywhere in the site and, when the page is rendered, it will replace it with the root to the customer front end (same for "[afe]" and admin front end - so in the admin front end I can put "[cfe]/Default.aspx" to link to the homepage on the customer front end. This is in place as I have a development version of the site, then a test and a live version too. All 3 may have different roots to each section (for example the way the website is set up, the root to the admin front end in test is "/test/Administration/", but in live and development it is just "/Administration/"). Which version it is depends on the URL - all my development sites are in a folder called "development", whereas test is in a folder called "test" and any live urls do not contain either of these. There are also 3 different databases - one for each. All 3, obviously, require a different connection string. I also have a string replacement function in place which can change, for example, "[Product:Cards]" to point to the Cards catalogue page. Problem is that for this I go through all the products and do a replacement on "[Product:" & Product.Name() & "]". However I would like to take this further. I would like to pick up these custom strings when the page is rendered so it picks up "[Product:Cards]" and then goes off to find product "Cards" and replaces the string with a link to the Cards page, rather than looping through all the products and doing a replace just on the off chance that there are any replacements to make. One use for this, which I may start using in the future if I can figure out how to do this, is like on Wikipedia where you put the title of the page you want to point to, then a divider (think its a pipe from memory) then the link text. I would like to apply this to the above situation. This way broken links can also be picked up, and reported to admin (a major advantage as they can then locate them and remove the link or add the product / page that it refers to). I would like to take this to the stage where content of entire pages can be rearranged (kinda like web parts, but not as advanced as that). I mean like so you can put [layout type="3columnImageTopRight" image="imageurl"]Content here[/layout]. This will display, as specified, an image in the top right (with padding at the left and bottom) and 3 columns - maybe with the image spanning one or two columns). The imageurl can be specified as another token: maybe like [Image:imagename.gif] or something. This replaces it with the root to the image folder and then the specified filename. I have not really looked into how I am going to split the content into 3 columns yet, but this would be something to look at for my dissertation and then implement after my project deadline at least. Does anyone have any ideas or pointers which could help me with this? Also if this is not strictly token replacement then please point me to what it is, so I can further develop this. Thanks in advance, Regards, Richard Clarke
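    For the "pick up the tokens that actually appear in the page rather than looping over every product" part, one common approach is a single regex pass over the rendered content with a callback per match: only tokens present in the page get looked up, and anything that fails to resolve can feed the broken-link report described above. A rough sketch in C# (the lookup and reporting delegates are placeholders for whatever the site already has):

using System;
using System.Text.RegularExpressions;

public static class TokenExpander
{
    // Matches [Product:Name] and, optionally, [Product:Name|Link text]
    private static readonly Regex ProductToken =
        new Regex(@"\[Product:(?<name>[^\]\|]+)(\|(?<text>[^\]]+))?\]",
                  RegexOptions.Compiled | RegexOptions.IgnoreCase);

    // lookupUrl: product name -> catalogue URL, or null if no such product.
    // reportBroken: called once per unresolved token so admin can be notified.
    public static string Expand(string content,
                                Func<string, string> lookupUrl,
                                Action<string> reportBroken)
    {
        return ProductToken.Replace(content, match =>
        {
            string name = match.Groups["name"].Value.Trim();
            string text = match.Groups["text"].Success
                              ? match.Groups["text"].Value
                              : name;
            string url = lookupUrl(name);
            if (url == null)
            {
                reportBroken(name);   // broken link: report it, leave the text visible
                return text;
            }
            return string.Format("<a href=\"{0}\">{1}</a>", url, text);
        });
    }
}

    Because only the tokens present in the page are looked up, a page with no product references costs nothing, and the same pattern extends to the [Image:...] tokens and, via the second capture group shown, to the Wikipedia-style "name|link text" form.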

    Read the article

  • Access violation when accessing a COM object from .Net

    - by Groo
    Dear Sirs, I am sorry if the post is too long, but I would be happy if someone would at least point read the bolded titles, and point me in the right direction. I am having this problem for couple of days, but was unable to found the answer on the net. These are the things I have found out so far. 1. "Access violation" exception crushes my managed application My C# WinForms app sometimes closes with an "Access violation" exception ("Attempted to read or write protected memory"), right in the moment when selecting a TabPage in a windows form TabControl. From the stack trace (try/catch around Application.Run) I can see that the exception happens at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg), called inside UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData). -- Message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. -- Stack trace: at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager .System.Windows.Forms.UnsafeNativeMethods .IMsoComponentManager.FPushMessageLoop (Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext .RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext .RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(ApplicationContext context) at MyApp.Program.Main() 2. The faulting module seems to be a COM object (ChartFX Client Server 6.2) Using WinDbg (with SoS loaded), I caught it on the unmanaged side, inside ChartFX.ClientServer.Core.dll (that's a COM charting component we are using): (ca84.c98c): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. eax=00000000 ebx=06e67c38 ecx=06e67c38 edx=000018c6 esi=06e7df30 edi=317a9e80 eip=31666110 esp=0015e040 ebp=0015e08c iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246 ChartFX_ClientServer_Core!Ordinal5507+0x97b7: 31666110 8a404d mov al,byte ptr [eax+4Dh] ds:0023:0000004d=?? [edit:] I also wasn't able to get the unmamanged stack details from WinDbg (it said "Stack unwind info not available"): 0:000 kP ChildEBP RetAddr WARNING: Stack unwind information not available. Following frames may be wrong. 
0015e08c 3166288b ChartFX_ClientServer_Core!Ordinal5507+0x97b7 0015e394 3165a921 ChartFX_ClientServer_Core!Ordinal5507+0x5f32 0015e480 31678685 ChartFX_ClientServer_Core!Ordinal5496+0x26a 0015e568 3167bef4 ChartFX_ClientServer_Core!Ordinal5492+0x975 0015e668 316a356b ChartFX_ClientServer_Core!Ordinal5492+0x41e4 0015e77c 31709496 ChartFX_ClientServer_Core!Ordinal443+0x5745 0015e7d0 31707f70 ChartFX_ClientServer_Core!Ordinal2584+0x3cdc 0015e7f8 3170817d ChartFX_ClientServer_Core!Ordinal2584+0x27b6 0015e81c 3162fd76 ChartFX_ClientServer_Core!Ordinal2584+0x29c3 0015e86c 7719f8d2 ChartFX_ClientServer_Core!Ordinal899+0x6b6 0015e898 7719f794 USER32!GetMessageW+0x93 0015e910 771a06f6 USER32!GetWindowLongW+0x115 0015e940 771a069c USER32!CallWindowProcW+0x75 0015e960 747fcef4 USER32!CallWindowProcW+0x1b 0015e97c 747fd073 comctl32!Ordinal377+0x5c 0015e9e0 747fd027 comctl32!DefSubclassProc+0x92 0015ea04 747fd4e6 comctl32!DefSubclassProc+0x46 0015ea20 747fd073 comctl32!DefSubclassProc+0x505 0015ea84 747fd118 comctl32!DefSubclassProc+0x92 0015eae4 7719f8d2 comctl32!DefSubclassProc+0x137 3. Bug is not easy to reproduce (although it can be provoked usually in less than 5 min.) I have several Chart instances in several TabPages, and this usually happens while I am switching the tabs. I still don't know how to reproduce it, besides switching those tabs for several minutes before it happens, so I cannot use our source control to reliably find the build which didn't have this problem. I am accessing the charts through the managed AxChart wrapper class (derived from AxHost), which was created by VS designer automatically. 4. What should be my next step? If someone could point me to the next step I should do to find the actual cause, I would be very grateful. Experimenting (removing and returning code) does not do much good, because I don't know how to reproduce it, so it would take large amounts of time on each iteration just to convince myself that the bug is still there. I have found that people often suggest something like "switching compiler optimizations", but since the exception is not thrown deterministically, I don't want to simply rearrange some bytes and hope that it never returns. Thanks a lot in advance! Best regards, Groo

    Read the article

  • Python: Improving long cumulative sum

    - by Bo102010
    I have a program that operates on a large set of experimental data. The data is stored as a list of objects that are instances of a class with the following attributes: time_point - the time of the sample cluster - the name of the cluster of nodes from which the sample was taken code - the name of the node from which the sample was taken qty1 = the value of the sample for the first quantity qty2 = the value of the sample for the second quantity I need to derive some values from the data set, grouped in three ways - once for the sample as a whole, once for each cluster of nodes, and once for each node. The values I need to derive depend on the (time sorted) cumulative sums of qty1 and qty2: the maximum value of the element-wise sum of the cumulative sums of qty1 and qty2, the time point at which that maximum value occurred, and the values of qty1 and qty2 at that time point. I came up with the following solution: dataset.sort(key=operator.attrgetter('time_point')) # For the whole set sys_qty1 = 0 sys_qty2 = 0 sys_combo = 0 sys_max = 0 # For the cluster grouping cluster_qty1 = defaultdict(int) cluster_qty2 = defaultdict(int) cluster_combo = defaultdict(int) cluster_max = defaultdict(int) cluster_peak = defaultdict(int) # For the node grouping node_qty1 = defaultdict(int) node_qty2 = defaultdict(int) node_combo = defaultdict(int) node_max = defaultdict(int) node_peak = defaultdict(int) for t in dataset: # For the whole system ###################################################### sys_qty1 += t.qty1 sys_qty2 += t.qty2 sys_combo = sys_qty1 + sys_qty2 if sys_combo > sys_max: sys_max = sys_combo # The Peak class is to record the time point and the cumulative quantities system_peak = Peak(time_point=t.time_point, qty1=sys_qty1, qty2=sys_qty2) # For the cluster grouping ################################################## cluster_qty1[t.cluster] += t.qty1 cluster_qty2[t.cluster] += t.qty2 cluster_combo[t.cluster] = cluster_qty1[t.cluster] + cluster_qty2[t.cluster] if cluster_combo[t.cluster] > cluster_max[t.cluster]: cluster_max[t.cluster] = cluster_combo[t.cluster] cluster_peak[t.cluster] = Peak(time_point=t.time_point, qty1=cluster_qty1[t.cluster], qty2=cluster_qty2[t.cluster]) # For the node grouping ##################################################### node_qty1[t.node] += t.qty1 node_qty2[t.node] += t.qty2 node_combo[t.node] = node_qty1[t.node] + node_qty2[t.node] if node_combo[t.node] > node_max[t.node]: node_max[t.node] = node_combo[t.node] node_peak[t.node] = Peak(time_point=t.time_point, qty1=node_qty1[t.node], qty2=node_qty2[t.node]) This produces the correct output, but I'm wondering if it can be made more readable/Pythonic, and/or faster/more scalable. The above is attractive in that it only loops through the (large) dataset once, but unattractive in that I've essentially copied/pasted three copies of the same algorithm. 
To avoid the copy/paste issues of the above, I tried this also: def find_peaks(level, dataset): def grouping(object, attr_name): if attr_name == 'system': return attr_name else: return object.__dict__[attrname] cuml_qty1 = defaultdict(int) cuml_qty2 = defaultdict(int) cuml_combo = defaultdict(int) level_max = defaultdict(int) level_peak = defaultdict(int) for t in dataset: cuml_qty1[grouping(t, level)] += t.qty1 cuml_qty2[grouping(t, level)] += t.qty2 cuml_combo[grouping(t, level)] = (cuml_qty1[grouping(t, level)] + cuml_qty2[grouping(t, level)]) if cuml_combo[grouping(t, level)] > level_max[grouping(t, level)]: level_max[grouping(t, level)] = cuml_combo[grouping(t, level)] level_peak[grouping(t, level)] = Peak(time_point=t.time_point, qty1=node_qty1[grouping(t, level)], qty2=node_qty2[grouping(t, level)]) return level_peak system_peak = find_peaks('system', dataset) cluster_peak = find_peaks('cluster', dataset) node_peak = find_peaks('node', dataset) For the (non-grouped) system-level calculations, I also came up with this, which is pretty: dataset.sort(key=operator.attrgetter('time_point')) def cuml_sum(seq): rseq = [] t = 0 for i in seq: t += i rseq.append(t) return rseq time_get = operator.attrgetter('time_point') q1_get = operator.attrgetter('qty1') q2_get = operator.attrgetter('qty2') timeline = [time_get(t) for t in dataset] cuml_qty1 = cuml_sum([q1_get(t) for t in dataset]) cuml_qty2 = cuml_sum([q2_get(t) for t in dataset]) cuml_combo = [q1 + q2 for q1, q2 in zip(cuml_qty1, cuml_qty2)] combo_max = max(cuml_combo) time_max = timeline.index(combo_max) q1_at_max = cuml_qty1.index(time_max) q2_at_max = cuml_qty2.index(time_max) However, despite this version's cool use of list comprehensions and zip(), it loops through the dataset three times just for the system-level calculations, and I can't think of a good way to do the cluster-level and node-level calaculations without doing something slow like: timeline = defaultdict(int) cuml_qty1 = defaultdict(int) #...etc. for c in cluster_list: timeline[c] = [time_get(t) for t in dataset if t.cluster == c] cuml_qty1[c] = [q1_get(t) for t in dataset if t.cluster == c] #...etc. Does anyone here at Stack Overflow have suggestions for improvements? The first snippet above runs well for my initial dataset (on the order of a million records), but later datasets will have more records and clusters/nodes, so scalability is a concern. This is my first non-trivial use of Python, and I want to make sure I'm taking proper advantage of the language (this is replacing a very convoluted set of SQL queries, and earlier versions of the Python version were essentially very ineffecient straight transalations of what that did). I don't normally do much programming, so I may be missing something elementary. Many thanks!
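    One way to keep the single pass over the dataset but drop the copy/paste is to make the grouping key a function and keep the per-key running state in dictionaries, so the same loop serves the system, cluster and node reports at once. A sketch along those lines follows; Peak is stubbed as a namedtuple because its real definition isn't shown, and the attribute used for the node grouping may be 'code' or 'node' depending on which name the class actually uses.

from collections import defaultdict, namedtuple
from operator import attrgetter

Peak = namedtuple('Peak', 'time_point qty1 qty2')   # stand-in for the real Peak class

def find_peaks(dataset, groupings):
    """groupings: dict of label -> key function, e.g. {'cluster': attrgetter('cluster')}.
    Returns dict of label -> {key: Peak} holding the peak of cumulative qty1 + qty2."""
    qty1 = {label: defaultdict(int) for label in groupings}
    qty2 = {label: defaultdict(int) for label in groupings}
    best = {label: defaultdict(int) for label in groupings}
    peaks = {label: {} for label in groupings}

    for t in sorted(dataset, key=attrgetter('time_point')):
        for label, keyfunc in groupings.items():
            k = keyfunc(t)
            qty1[label][k] += t.qty1
            qty2[label][k] += t.qty2
            combo = qty1[label][k] + qty2[label][k]
            if combo > best[label][k]:
                best[label][k] = combo
                peaks[label][k] = Peak(t.time_point, qty1[label][k], qty2[label][k])
    return peaks

# One pass covers all three reports:
# peaks = find_peaks(dataset, {'system': lambda t: 'system',
#                              'cluster': attrgetter('cluster'),
#                              'node': attrgetter('code')})   # or 'node'
# system_peak  = peaks['system']['system']
# cluster_peak = peaks['cluster']   # dict keyed by cluster name
# node_peak    = peaks['node']      # dict keyed by node name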

    Read the article

  • JFreeChart CategoryPlot overwrites categories

    - by Christine
    Hi, I am new to using JFreeChart and I'm sure there is a simple solution to my problem . . PROBLEM: I have a chart that shows multiple "events types" along the date X axis. The Y axis shows the "event category". My problem is that only the latest date of an event type is shown for each category. In the example below The chart shows data points for Event Type 1 at June 20th(Category 1) and at June 10th (Category 2). I had also added a data point for June 10th, Category 1 but the June 20th point erases it. I think I am misunderstanding how the CategoryPlot is working. Am I using the wrong type of chart? I thought a scatter chart would be the ticket but it only accepts numerical values. I need to have discrete string categories on my Y-axis. If anyone can point me in the right direction, you would really make my day. Thanks for reading! -Christine (The code below works as-is. It is as simple as I could make it) import java.awt.Dimension; import javax.swing.JPanel; import org.jfree.chart.ChartPanel; import org.jfree.chart.JFreeChart; import org.jfree.chart.axis.CategoryAxis; import org.jfree.chart.axis.DateAxis; import org.jfree.chart.plot.CategoryPlot; import org.jfree.chart.plot.PlotOrientation; import org.jfree.chart.renderer.category.LineAndShapeRenderer; import org.jfree.data.category.CategoryDataset; import org.jfree.data.category.DefaultCategoryDataset; import org.jfree.data.time.Day; import org.jfree.ui.ApplicationFrame; import org.jfree.ui.RefineryUtilities; public class EventFrequencyDemo1 extends ApplicationFrame { public EventFrequencyDemo1(String s) { super(s); CategoryDataset categorydataset = createDataset(); JFreeChart jfreechart = createChart(categorydataset); ChartPanel chartpanel = new ChartPanel(jfreechart); chartpanel.setPreferredSize(new Dimension(500, 270)); setContentPane(chartpanel); } private static JFreeChart createChart(CategoryDataset categorydataset) { CategoryPlot categoryplot = new CategoryPlot(categorydataset, new CategoryAxis("Category"), new DateAxis("Date"), new LineAndShapeRenderer(false, true)); categoryplot.setOrientation(PlotOrientation.HORIZONTAL); categoryplot.setDomainGridlinesVisible(true); return new JFreeChart(categoryplot); } private static CategoryDataset createDataset() { DefaultCategoryDataset defaultcategorydataset = new DefaultCategoryDataset(); Day june10 = new Day(10, 6, 2002); Day june20 = new Day(20, 6, 2002); // This event is overwritten by June20th defaultcategorydataset.setValue(new Long(june10.getMiddleMillisecond()), "Event Type 1", "Category 1"); defaultcategorydataset.setValue(new Long(june10.getMiddleMillisecond()), "Event Type 1", "Category 2"); // Overwrites the previous June10th event defaultcategorydataset.setValue(new Long(june20.getMiddleMillisecond()), "Event Type 1", "Category 1"); defaultcategorydataset.setValue(new Long(june20.getMiddleMillisecond()), "Event Type 2", "Category 2"); return defaultcategorydataset; } public static JPanel createDemoPanel() { JFreeChart jfreechart = createChart(createDataset()); return new ChartPanel(jfreechart); } public static void main(String args[]) { EventFrequencyDemo1 eventfrequencydemo1 = new EventFrequencyDemo1("Event Frequency Demo"); eventfrequencydemo1.pack(); RefineryUtilities.centerFrameOnScreen(eventfrequencydemo1); eventfrequencydemo1.setVisible(true); } }
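    The behaviour described is what DefaultCategoryDataset is designed to do: it holds exactly one value per (series, category) pair, so the second setValue for "Event Type 1" / "Category 1" replaces the first rather than adding another point. If the goal is several dated events per category, one direction to try is an XYPlot with a DateAxis for the dates and a SymbolAxis supplying the discrete category labels. A rough sketch follows; the class names are from the standard JFreeChart API, but it has not been tested against this exact JFreeChart version.

import org.jfree.chart.JFreeChart;
import org.jfree.chart.axis.DateAxis;
import org.jfree.chart.axis.SymbolAxis;
import org.jfree.chart.plot.XYPlot;
import org.jfree.chart.renderer.xy.XYLineAndShapeRenderer;
import org.jfree.data.time.Day;
import org.jfree.data.xy.XYSeries;
import org.jfree.data.xy.XYSeriesCollection;

public class EventFrequencySketch {
    public static JFreeChart createChart() {
        // y value = index into the category labels, x value = time in milliseconds
        String[] categories = {"Category 1", "Category 2"};

        XYSeries type1 = new XYSeries("Event Type 1");
        type1.add(new Day(10, 6, 2002).getMiddleMillisecond(), 0);  // Category 1
        type1.add(new Day(20, 6, 2002).getMiddleMillisecond(), 0);  // both points survive
        type1.add(new Day(10, 6, 2002).getMiddleMillisecond(), 1);  // Category 2

        XYSeries type2 = new XYSeries("Event Type 2");
        type2.add(new Day(20, 6, 2002).getMiddleMillisecond(), 1);

        XYSeriesCollection dataset = new XYSeriesCollection();
        dataset.addSeries(type1);
        dataset.addSeries(type2);

        XYPlot plot = new XYPlot(dataset,
                new DateAxis("Date"),                    // domain: time
                new SymbolAxis("Category", categories),  // range: discrete labels
                new XYLineAndShapeRenderer(false, true)); // shapes only, no lines
        return new JFreeChart(plot);
    }
}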

    Read the article
