Search Results

Search found 19881 results on 796 pages for 'log analysis'.


  • Clean Ubuntu 14.04 install taking longer to boot after kernel upgrade

    - by mvburgos
    Hello, this is my first post. I own an Asus Chromebox (2 GB RAM, 16 GB SSD, USB 3.0 ports) with a fresh install of Ubuntu 14.04 x64. I decided to upgrade the kernel because I was experiencing some freezing issues with XBMC. So first I installed the stable kernel 3.14.04 from mainline, and boot time was fine (it booted very fast, I would say about 5 seconds). After a while I installed kernels 3.15 and 3.14.06, but this time I also removed the OLD STOCK KERNELS (I don't know if that has something to do with it), and after that the problem began: boot now takes much longer, I would say about 30 seconds. Finally, today I installed kernel 3.14.07 and set GRUB to boot it by default, but it's the same thing :( I'm posting my boot.log and dmesg; I hope that is the right information for this, and if you need any other log just let me know.
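    A profiling hint (an added note, not from the original question): on a pre-systemd Ubuntu such as 14.04, the bootchart package can show which boot job is consuming the extra 25 seconds. Package names assume Ubuntu's standard archives:

    sudo apt-get install bootchart pybootchartgui
    sudo reboot
    # After the reboot, a rendered chart of the boot sequence is
    # typically written under /var/log/bootchart/
    ls /var/log/bootchart/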


  • Gnome 3.10 doesn't show login, Ubuntu 13.10

    - by TheWebs
    I use GNOME as my default desktop. I recently upgraded Ubuntu to 13.10 and then upgraded GNOME to 3.10. Upon restarting, it brings me to the GNOME login screen (GDM) and all I see is the top banner part: the date, the volume control, and the options to restart, suspend or shut down. Hitting Ctrl+Alt+Delete tells me GNOME Display Manager will log out; it does, but still nothing. I am on the login screen, but there is no user to select like there normally is (or was). I can boot into safe mode and drop down to a shell, but I am not sure what commands I would enter to get the login screen back. I am using the GNOME PPAs, including the "unstable" PPA, so I expected issues, but not this. Ideas?
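    A hedged suggestion (an addition, not from the original post): since a root shell is reachable from recovery mode, one common recovery path is to reinstall and reconfigure the display manager. Package names assume Ubuntu 13.10's archives are reachable:

    sudo apt-get update
    # Reinstall GDM and the shell in case the 3.10 upgrade left half-configured packages
    sudo apt-get install --reinstall gdm gnome-shell
    # Re-run GDM's configuration, which also lets you pick the default display manager
    sudo dpkg-reconfigure gdm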


  • OOP vs PP for algorithms

    - by Ilian
    Which paradigm is better for the design and analysis of algorithms, and which is faster? I ask because I have a course called Design and Analysis of Algorithms at university, and its programs have a time limit. Is OOP slower than procedural programming, or is the time difference insignificant?
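    As a rough illustration (an added sketch, not from the original question), the per-call overhead of the two styles can be measured directly. The Python example below times a procedural function against an equivalent method; the gap it shows is method-dispatch overhead, a constant factor.

    import timeit

    def pp_sum(data):
        # Procedural version: a free function over its input.
        total = 0
        for x in data:
            total += x
        return total

    class Summer:
        # OOP version: the same loop behind a constructor and a method call.
        def __init__(self, data):
            self.data = data

        def sum(self):
            total = 0
            for x in self.data:
                total += x
            return total

    data = list(range(10000))
    print(timeit.timeit(lambda: pp_sum(data), number=1000))
    print(timeit.timeit(lambda: Summer(data).sum(), number=1000))

    The two timings are of the same order of magnitude; the choice of algorithm, not paradigm, usually decides whether a time limit is met.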


  • First Party and Third Party Cookie

    - by ajithperuva
    I want to create an analytics project (much like Google Analytics) for getting conversion rates and tracking visitor counts. How can we create a first-party cookie and a third-party cookie using PHP? And how can we actually tell our third-party and first-party cookies apart; is there any standard to follow for identifying them? If anybody knows, please give me some idea about it.
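    For illustration (an added sketch, not from the original question): in PHP a first-party cookie is simply setcookie() executed by the site the visitor is on; "third-party" is not a flag on the cookie but a classification the browser applies when the domain setting the cookie differs from the domain of the page. The cookie name and lifetime below are arbitrary choices.

    <?php
    // First-party cookie: set by the domain the visitor is browsing.
    // 'visitor_id' and the 30-day lifetime are illustrative values.
    setcookie('visitor_id', uniqid('', true), time() + 60 * 60 * 24 * 30, '/');

    // The very same call produces a "third-party" cookie when it runs in a
    // request served from your tracking domain (say, a <script> or <img>
    // embedded on a client's site): browsers classify a cookie by the
    // domain that set it, not by anything inside the cookie itself.

    A practical consequence is that trackers often set both kinds: the first-party cookie survives stricter browser settings, while the third-party one lets the same visitor be recognized across different client sites.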


  • Motion detection in compressed domain (JPEG/Mpeg4/H264)

    - by paft
    Hi everyone! I process video from IP cameras and have written a motion detection algorithm based on analysis of the decompressed video. But I really need something faster. I've found several papers about compressed-domain analysis but have failed to find any implementations. Can anyone recommend some code? Materials found so far: http://www.ist-live.org/intranet/school-of-informatics-university-of-bradford001-7/41410206.pdf/view http://doc.rero.ch/lm.php?url=1000,43,4,20061128120121-NA/Bracamonte_Javier_-_A_Low_Complexity_Change_Detection_Algorithm_20061128.pdf
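    One practical pointer (an added note, not from the original question): the usual raw material for compressed-domain motion detection is the motion vectors the encoder already computed, and recent FFmpeg builds can expose them without pixel-domain analysis. A quick way to eyeball the vectors (file names are placeholders):

    ffmpeg -flags2 +export_mvs -i camera.mp4 -vf codecview=mv=pf+bf+bb preview.mp4

    For programmatic access, the FFmpeg source tree ships a small example, doc/examples/extract_mvs.c, that reads the exported vectors frame by frame; thresholding their magnitudes per macroblock is a common first cut at a detector.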


  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all,
    I remember coming across R users writing that they use "revision control" (e.g. "source control"), and I am curious to know: how do you combine revision control with your statistical analysis workflow?

    Two (very) interesting discussions talk about how to deal with the workflow, but neither of them refers to the revision control element:
    http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs
    http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing

    A long update to the question: following some of the answers, and Dirk's question in the comments, I would like to direct my question a bit more. After reading the Wikipedia article about revision control (which I was previously not familiar with), it was clear to me that when using revision control, what one does is build a development structure for one's code. This structure leads either to a "final product" or to several branches. When building something like, let's say, a website, there is usually one end product you work towards (the website), with some prototypes along the way.

    But when doing a statistical analysis, the work is (to my view) different. Sometimes you know where you want to get to, but more often you explore: explore cleaning the dataset, explore different methods of statistical analysis, and ask various questions of your data (and I write this knowing how Frank Harrell and other experienced statisticians feel about data dredging). That is why the workflow question for statistical programming is (in my view) a serious and deep question, raising many issues. The simpler ones are technical: which revision control software do you use (and why)? Which IDE do you use (and why)? The more interesting questions are about the work process: how do you structure your files? What do you keep as a separate file and what as a revision? Or, asking it a different way: what should be a "branch" and what should be a "sub-project" in your code? For example, when starting to explore your data, should a plot be created and then erased because it didn't lead anywhere (but kept as a revision), or should there be a backup file of that path? How you resolve this tension was my initial curiosity.

    The second question is "what might I be missing?". What rules of thumb should one follow to avoid common pitfalls when doing statistical programming with version control? My intuition is that statistical programming is inherently different from software development (I write this without being a real expert in statistical programming, and even less so in software development). That is why I am unsure which of the lessons I have read here about version control are applicable.

    Thanks a lot, Tal
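    To make the "what should be a branch" question concrete (an added sketch, assuming git; none of this is from the original question): one common pattern keeps master as the trusted analysis and treats each exploratory path as a cheap, disposable branch.

    cd myanalysis && git init
    git add clean_data.R analysis.R
    git commit -m "baseline: load and clean data"

    git checkout -b explore-winsorizing       # a throwaway exploration
    # ...edit analysis.R; commit even the dead ends so they stay recoverable...
    git commit -am "try winsorizing instead of trimming outliers"

    git checkout master
    git merge explore-winsorizing             # keep the path that worked, or:
    git branch -D explore-winsorizing         # discard it (unmerged commits are dropped)

    Under this pattern the erased-plot dilemma above resolves itself: the plot lives as a commit on its exploration branch, recoverable but out of the way of the main line.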


  • Dynamically loaded jQuery with GreaseMonkey inconsistent on pages (refreshing seems to fix it)... do

    - by uprightnetizen
    Hi, I want a custom page-analysis footer on every site I visit, so I've used a method to attach jQuery to unsafeWindow. I then create a floating footer on the page. I want to be able to call commands from a menu, do some processing, then put the results in the footer. Unfortunately, it sometimes works and sometimes doesn't. At least two alerts should happen in the printOutput function; sometimes only one fires, and then it (crashes?) without error. On other pages, both alerts fire and it finds the element, but it doesn't add the extra text (e.g. www.linode.com). Refreshing the page and then running the printOutput command again seems to always work. Does anyone know what's going on??? The userscript can be installed at: http://www.captionwizard.com/test/page_analysis.user.js

    // ==UserScript==
    // @name          page_analysis
    // @namespace     markspace
    // @description   Page Analysis
    // @include       http://*/*
    // ==/UserScript==

    (function() {
        // Add jQuery
        var GM_JQ = document.createElement('script');
        GM_JQ.src = 'http://code.jquery.com/jquery-1.4.2.min.js';
        GM_JQ.type = 'text/javascript';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ);

        var jqueryActive = false;

        // Check if jQuery's loaded
        function GM_wait() {
            if (typeof unsafeWindow.jQuery == 'undefined') {
                window.setTimeout(GM_wait, 100);
            } else {
                $ = unsafeWindow.jQuery;
                letsJQuery();
            }
        }
        GM_wait();

        function letsJQuery() {
            jqueryActive = true;
            setupOutputFooter();
        }

        /*******************************
         Analysis FOOTER Functions
        ******************************/
        function printOutput(someText) {
            alert('printing output');
            if ($('div.analysis_footer').length) {
                alert('is here - appending');
                $('div.analysis_footer').append('<br>' + someText);
            } else {
                alert('not here - trying again');
                setupOutputFooter();
                $('div.analysis_footer').append('<br>' + someText);
            }
        }

        GM_registerMenuCommand("Test Output", testOutput, "k", "control", "k");

        function testOutput() {
            printOutput('testing this');
        }

        function setupOutputFooter() {
            $('<div class="analysis_footer">Page Analysis Footer:</div>').appendTo('body');
            $('div.analysis_footer').css('position','fixed').css('bottom', '0px').css('background-color','#F8F8F8');
            $('div.analysis_footer').css('width','100%').css('color','#3B3B3B').css('font-size', '0.8em');
            $('div.analysis_footer').css('font-family', '"Myriad",Verdana,Arial,Helvetica,sans-serif').css('padding', '5px');
            $('div.analysis_footer').css('border-top', '1px solid black').css('text-align', 'left');
        }
    }());
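    A note on the likely cause (an added suggestion, not in the original post): injecting jQuery through a dynamically created <script> tag races against the page's own load sequence, which fits the "refreshing seems to fix it" symptom. GreaseMonkey's @require directive avoids the race entirely: the library is fetched once at install time and evaluated in the script's sandbox before the body runs, so the injection and GM_wait plumbing can be dropped. A sketch of the metadata block:

    // ==UserScript==
    // @name        page_analysis
    // @namespace   markspace
    // @description Page Analysis
    // @include     http://*/*
    // @require     http://code.jquery.com/jquery-1.4.2.min.js
    // ==/UserScript==

    // With @require, $ is already defined by the time this line executes.
    $(document).ready(setupOutputFooter);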


  • What does the symbol ::: mean in R

    - by Milktrader
    I came across this in the following context, from B. Pfaff's "Analysis of Integrated and Cointegrated Time Series in R":

    ## Impulse response analysis of SVAR A-type model
    args(vars:::irf.svarest)
    irf.svara <- irf(svar.A, impulse = "y1",
                     response = "y2", boot = FALSE)
    args(vars:::plot.varirf)
    plot(irf.svara)
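    For context (an added note, not part of the original question): :: accesses an object a package exports, while ::: also reaches unexported, internal objects, which is why the book needs it to inspect internal S3 methods of the vars package. A minimal sketch, assuming vars is installed:

    library(vars)
    # ':::' sees internal objects; '::' sees only exported ones.
    args(vars:::irf.svarest)                            # works: internal S3 method
    exists("irf.svarest", envir = asNamespace("vars"))  # TRUE
    # args(vars::irf.svarest)                           # error: not exported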


  • SQL Server (2012 Enterprise) Browser service failing

    - by Watki02
    I have a problem as described below. I have an instance of SQL Server 2012 Enterprise (thanks to MSDN) for local development on my PC. When I try to start the SQL Server Browser service from SQL Server Configuration Manager, it takes a long time to fail, then fails with:

    The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details.

    I checked the event logs and found these errors, in this order (all within the same one-second time frame):

    The SQL Server Browser service port is unavailable for listening, or invalid.
    The SQL Server Browser service was unable to establish SQL instance and connectivity discovery.
    The SQL Server Browser is enabling SQL instance and connectivity discovery support.
    The SQL Server Browser service was unable to establish Analysis Services discovery.
    The SQL Server Browser service has started.
    The SQL Server Browser service has shutdown.

    I checked firewall rules and both port 1433 (TCP) and 1434 (UDP) are wide open; the program and service binaries have likewise been allowed through Windows Firewall. I started the Analysis Services service by hand and it works fine. Browser still won't start.

    Some history:

    Installed SQL 2008 R2 Express Advanced
    Installed SQL 2012 Express Advanced
    Uninstalled SQL 2008 R2 Express Advanced
    Installed 2012 SSDT and lots of features with the Express install
    Installed a unique instance of SQL 2012 Enterprise with all features
    Uninstalled SSDT and reinstalled SSDT with Enterprise (solved a different problem)
    Uninstalled SQL 2012 Express
    Uninstalled SQL 2012 Enterprise
    Removed anything with "SQL" in the name from Control Panel "Programs and Features"
    Installed SQL 2012 Enterprise without Analysis Services (this is where I noticed the SQL Browser service was failing to start, even during the install)
    Added the Analysis Services feature (and everything else) via the installer (Browser continued to fail to start during the install)

    Other interesting facts: opening a command window as administrator and trying to run sqlbrowser.exe manually yielded:

    Microsoft Windows [Version 6.1.7601]
    Copyright (c) 2009 Microsoft Corporation. All rights reserved.

    C:\Windows\system32>cd C:\Program Files (x86)\Microsoft SQL Server\90\Shared

    C:\Program Files (x86)\Microsoft SQL Server\90\Shared>sqlbrowser.exe -c
    SQLBrowser: starting up in console mode
    SQLBrowser: starting up SSRP redirection service
    SQLBrowser: failed starting SSRP redirection services -- shutting down.
    SQLBrowser: starting up OLAP redirection service
    SQLBrowser: Stopping the OLAP redirector

    C:\Program Files (x86)\Microsoft SQL Server\90\Shared>

    As I try to repair the install, it errors out saying:

    The following error has occurred: Service 'SQLBrowser' start request failed. Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    Clicking Retry fails every time. When clicking Cancel I get:

    The following error has occurred: SQL Server Browser configuration for feature 'SQL_Browser_Redist_SqlBrowser_Cpu32' was cancelled by user after a previous installation failure. The last attempted step: Starting the SQL Server Browser service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete.
    For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    When I go to uninstall the SQL Browser from "Programs and Features", it complains:

    Error opening installation log file. Verify that the specified log file location exists and is writable.

    Is there any way I can fix this, short of re-imaging my computer and reinstalling from scratch? A possible approach would be to somehow really uninstall everything and delete all files related to SQL... is that a good idea, and how would I do it?


  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20 GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2 TB, 5400 RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0). Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd-party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

    In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough - the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ... [Please read the entire post so the quote is not taken out of context.]

    Could it simply be that the 3rd-party defrag utils ignore this post-XP change and continue to use analysis algorithms similar to those XP used?

    2) Assuming that the 3rd-party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly, given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algorithms explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, the 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd-party utils for 10 hours straight?

    4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold, and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post-defragging with 3rd-party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).


  • How to avoid the Portlet Skin mismatch

    - by Martin Deh
    There are probably many ongoing debates about whether to use portlets or task flows in a WebCenter custom portal application. Usually the main battleground in these debates is which technology enables better performance. The good news is that both of my colleagues, Maiko Rocha and George Maggessy, have posted their respective views on this topic, so I will not have to further that discussion. However, if you do plan to use portlets in a WebCenter custom portal application, this post will help you avoid the "portlet skin mismatch" issue.

    An example of the presence of the mismatch can be seen in the application's log:

    The skin customsharedskin.desktop specified on the requestMap will be used even though the consumer's skin's styleSheetDocumentId on the requestMap does not match the local skin's styleSheetDocument's id. This will impact performance since the consumer and producer stylesheets cannot be shared. The producer styleclasses will not be compressed to avoid conflicts. A reason the ids do not match may be the jars are not identical on the producer and the consumer. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.

    Notice that due to the mismatch, the portlet's CSS cannot be compressed, which will most likely impact performance in the portlet's consuming portal. The first part of this post will define the portlet mismatch and cover some debugging tips that can help you solve it. Following that, I will give a complete example of creating, using and sharing a shared skin in both a portlet producer and a consumer application.

    Portlet Mismatch Defined

    In general, when you consume/render an ADF page (or task flow) using the ADF Portlet bridge, the portlet (producer) tries to use the skin of the consumer page - this is called skin-sharing. When the producer cannot match the consumer skin, the portlet generates its own stylesheet and references it from its markup - this is called mismatched-skin. This can happen because:

    The consumer and producer use different versions of ADF Faces, or
    The consumer has additional skin-additions that the producer doesn't have, or vice versa, or
    The producer does not have the consumer skin

    For cases (1) and (2) above, the producer still uses the consumer skin ID to render its markup. For case (3), the producer defaults to using the portlet skin. If there is a skin mismatch, there may be a performance hit because:

    The browser needs to fetch this extra stylesheet (though it should be cached unless expires caching is turned off)
    The generated portlet markup uses uncompressed styles, resulting in larger markup

    It is often not obvious when a skin mismatch occurs, unless you look for one of these indicators:

    The log messages in the producer log, for example:

    The skin blafplus-rich.desktop specified on the requestMap will not be used because the styleSheetDocument id on the requestMap does not match the local skin's styleSheetDocument's id. It could mean the jars are not identical. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.

    The portlet markup inside the iframe: there should be a <link> tag to the portlet stylesheet resource like this (note the CSS is proxied through the consumer's resourceproxy):

    <link rel=\"stylesheet\" charset=\"UTF-8\" type=\"text/css\" href=\"http:.../resourceproxy/portletId...252525252Fadf%252525252Fstyles%252525252Fcache%252525252Fblafplus-rich-portlet-d1062g-en-ltr-gecko.css...

    Using an HTTP monitoring tool (e.g. Firebug, HttpWatch), you can see a request being made to the portlet stylesheet resource (see URL above).

    There are a number of reasons for mismatched-skin. For the skin to match, the producer and consumer must match on the following configurations:

    The ADF Faces version (different versions may have different style selectors)
    Style compression, which is defined in the web.xml (the default value is false, i.e. compression is ON)
    Tonal styles or themes, also defined in the web.xml via context-params
    The same skin additions (jars with skins) are available to both producer and consumer. Skin additions are defined in the trinidad-skins.xml, using the <skin-addition> tags. These are then aggregated from all the jar files in the classpath. If there is any jar that exists on the producer but not the consumer, or vice versa, you get a mismatch.

    Debugging Tips

    Ensure the style compression and tonal styles/themes match on the consumer and producer by looking at the web.xml documents for the consumer and producer applications. It is a bit more involved to determine whether the jars match. However, you can enable Trinidad logging to show which skin-addition it is processing. To enable this feature, update the logging.xml log level of both the producer and consumer WLS to FINEST. For example, in the case of the WebLogic server used by JDeveloper:

    $JDEV_USER_DIR/system<version number>/DefaultDomain/config/fmwconfig/servers/DefaultServer/logging.xml

    Add a new entry:

    <logger name="org.apache.myfaces.trinidadinternal.skin.SkinUtils" level="FINEST"/>

    Restart WebLogic and run the consumer page; you should see the following logging in both the consumer and producer log files. Any entry that doesn't match is a cause of the mismatch.
    The following is an example of what the log will produce with this setting:

    [SRC_CLASS: org.apache.myfaces.trinidadinternal.skin.SkinUtils] [APP: WebCenter] [SRC_METHOD: _getMetaInfSkinsNodeList]
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/announcement-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/calendar-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/forum-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/page-service-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-kudos-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-wall-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/portlet-client-adf-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/rtc-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/serviceframework-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/smarttag-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/spaces-service-skins.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.composer/3yo7j/WEB-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/adf-richclient-impl-11.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-faces.jar!/META-INF/trinidad-skins.xml
    Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-trinidad.jar!/META-INF/trinidad-skins.xml

    The Complete Example

    The first step is to create the shared library. The WebCenter documentation covering this is located here in section 15.7. In addition, our ADF guru Frank Nimphius also covers this in his blog. Here are my steps (in JDeveloper) to create the skin that will be used as the shared library for both the portlet producer and consumer.

    Create a new Generic Application:
    Give an application name (i.e. MySharedSkin)
    Give a project name (i.e. MySkinProject)
    Leave Project Technologies blank (none selected), and click Finish

    Create the trinidad-skins.xml:
    Right-click on the MySkinProject node in the Application Navigator and select "New"
    In the New Gallery, click on "General", select "File" from the Items, and click OK
    In the Create File dialog, name the file trinidad-skins.xml, and (IMPORTANT) give the directory path to MySkinProject\src\META-INF
    In the trinidad-skins.xml, complete the skin entry, for example:

    <?xml version="1.0" encoding="windows-1252" ?>
    <skins xmlns="http://myfaces.apache.org/trinidad/skin">
      <skin>
        <id>mysharedskin.desktop</id>
        <family>mysharedskin</family>
        <extends>fusionFx-v1.desktop</extends>
        <style-sheet-name>css/mysharedskin.css</style-sheet-name>
      </skin>
    </skins>

    Create the CSS file:
    In the Application Navigator, right-click on the META-INF folder (where the trinidad-skins.xml is located) and select "New"
    In the New Gallery, select Web-Tier->HTML, CSS File from the Items and click OK
    In the Create Cascading Style Sheet dialog, give the name (i.e. mysharedskin.css)
    Ensure that the directory path is under the META-INF (i.e. MySkinProject\src\META-INF\css)
    Once the new CSS opens in the editor, add a style selector. For example, this selector will style the background of a particular panelGroupLayout:

    af|panelGroupLayout.customPGL{
        background-color:Fuchsia;
    }

    Create the MANIFEST.MF (used for the deployment JAR):
    In the Application Navigator, right-click on the META-INF folder (where the trinidad-skins.xml is located) and select "New"
    In the New Gallery, click on "General", select "File" from the Items, and click OK
    In the Create File dialog, name the file MANIFEST.MF, and (IMPORTANT) ensure that the directory path is MySkinProject\src\META-INF
    Complete the MANIFEST.MF, where the extension name is the shared library name:

    Manifest-Version: 1.1
    Created-By: Martin Deh
    Implementation-Title: mysharedskin
    Extension-Name: mysharedskin.lib.def
    Specification-Version: 1.0.1
    Implementation-Version: 1.0.1
    Implementation-Vendor: MartinDeh

    Create a new deployment profile:
    Right-click on the MySkinProject node, and select New
    From the New Gallery, select General->Deployment Profiles, Shared Library JAR File from Items, and click OK
    In the Create Deployment Profile dialog, give a name (i.e. mysharedskinlib) and click OK
    In the Edit JAR Deployment dialog, un-check the Include Manifest File option
    Select Project Output->Contributors, and check Project Source Path
    Select Project Output->Filters, and ensure that all items under the META-INF folder are selected
    Click OK to exit the Project Properties dialog

    Deploy the shared lib to WebLogic (start the server before these steps):
    Right-click on MySkin Project and select Deploy
    For this example, I will deploy to JDeveloper's WLS
    In the Deploy dialog, select Deploy to Weblogic Application Server and click Next
    Choose IntegratedWebLogicServer and click Next
    Select the Deploy to selected instances in the domain radio, select Default Server (note: the server must already be started), and ensure the Deploy as a shared Library radio is selected
    Click Finish
    Open the WebLogic console to see the deployed shared library

    The following are the steps to create a simple test portlet:
    Create a new WebCenter Portal - Portlet Producer Application
    In the Create Portlet Producer dialog, select the default settings and click Finish
    Right-click on the Portlets node and select New
    In the New Gallery, select Web-Tier->Portlets, Standards-based Java Portlet (JSR 286) and click OK
    In the General Portlet Information dialog, give a portlet name (i.e. MyPortlet) and click Next 2 times, stopping at Step 3
    In the Content Types, select the "view" node, in the Implementation Method select the Generate ADF-Faces JSPX radio, and click Finish
    Once the portlet code is generated, open the view.jspx in the source editor
    Based on the simple CSS entry, which sets the background color of a panelGroupLayout, replace the <af:form/> tag with the example code:

    <af:form>
      <af:panelGroupLayout id="pgl1" styleClass="customPGL">
        <af:outputText value="background from shared lib skin" id="ot1"/>
      </af:panelGroupLayout>
    </af:form>

    Since this portlet is to use the shared library skin, in the generated trinidad-config.xml remove both the skin-family tag and the skin-version tag
    In the Application Resources view, under Descriptors->META-INF, double-click to open the weblogic-application.xml
    Add a library reference to the shared skin library (note: the library-name must match the extension-name declared in the MANIFEST.MF):

    <library-ref>
      <library-name>mysharedskin.lib.def</library-name>
    </library-ref>

    Notice that a reference to oracle.webcenter.skin exists. This is important if this portlet is going to be consumed by a WebCenter Portal application: if this tag is not present, the portlet skin mismatch will happen.

    Configure the portlet for deployment. Create the portlet deployment WAR:
    Right-click on the Portlets node and select New
    In the New Gallery, select Deployment Profiles, WAR file from Items and click OK
    In the Create Deployment Profile dialog, give a name (i.e. myportletwar) and click OK
    Keep all of the defaults, but remember the Context Root entry (i.e. MyPortlet4SharedLib-Portlets-context-root; this will be needed to obtain the producer WSDL URL)
    Click OK, then OK again to exit from the Properties dialog

    Since the weblogic-application.xml has to be included in the deployment, the portlet must be deployed as a WAR within an EAR:
    In the Application dropdown, select Deploy->New Deployment Profile...
    By default EAR File has been selected; click OK
    Give the deployment profile (EAR) a name (i.e. MyPortletProducer) and click OK
    In the Properties dialog, select Application Assembly and ensure that the myportletwar is checked
    Keep all of the other defaults and click OK
    For this demo, un-check the Auto Generate ... option and all of the Security Deployment Options, then click OK
    Save All
    In the Application dropdown, select Deploy->MyPortletProducer
    In the Deployment Action, select Deploy to Application Server and click Next
    Choose IntegratedWebLogicServer and click Next
    Select the Deploy to selected instances in the domain radio, select Default Server (note: the server must already be started), and ensure the Deploy as a standalone Application radio is selected
    The select deployment type dialog (identifying the deployment as a JSR 286 portlet) appears; keep the default "Yes" radio selection and click OK
    Open the WebLogic console to see the deployed portlet

    The last step is to create the test portlet-consuming application. This will be done using the OOTB WebCenter Portal - Framework Application.

    Create the portlet producer connection:
    In the JDeveloper deployment log, copy the URL of the portlet deployment (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root)
    Open a browser and paste in the URL; the Portlet Information page should appear. Click on the WSRP v2 WSDL link
    Copy the URL from the browser (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root/portlets/wsrp2?WSDL)
    In the Application Resources view, right-click on the Connections folder and select New Connection->WSRP Connection
    Give the producer a name or accept the default, and click Next
    Enter (paste in) the WSDL URL and click Next
    If the connection to the portlet is successful, Step 3 (Specify Additional ...) should appear. Accept the defaults and click Finish

    Add the portlet to a test page:
    Open the home.jspx. Note the orange dashed border in the visual editor, which identifies the panelCustomizable tag
    From the Application Resources, select the MyPortlet portlet node, and drag and drop the node into the panelCustomizable section. A Confirm Portlet Type dialog appears; keep the default ADF Rich Portlet and click OK

    Configure the portlet to use the shared skin library:
    Open the weblogic-application.xml and add the library-ref entry (mysharedskin.lib.def) for the shared skin library. See the create-portlet example above for the steps
    Since by default the custom portal uses a managed bean to (dynamically) determine the skin family, the default trinidad-config.xml needs to be altered
    Open the trinidad-config.xml in the editor and replace the EL (preferenceBean) for the skin-family tag with mysharedskin (this is the skin family name defined in the trinidad-skins.xml)
    Remove the skin-version tag
    Right-click on the index.html to test the application

    Notice that the JDeveloper log view does not report any skin mismatch. In addition, since I have configured the extra logging outlined in the debugging section above, I can see the processed skin jar in both the producer and consumer logs:

    <SkinUtils> <_getMetaInfSkinsNodeList> Processing skin URL:zip:/JDeveloper/system11.1.1.6.38.61.92/DefaultDomain/servers/DefaultServer/upload/mysharedskin.lib.def/[email protected]/app/mysharedskinlib.jar!/META-INF/trinidad-skins.xml


  • Cold Start

    - by antony.reynolds
    Well, we had snow drifts 3 ft deep on Saturday, so it must be springtime. In preparation for spring we decided to move the lawn tractor. Of course, after sitting in the garage all winter it refused to start. I then come into the office and need to start my 11g SOA Suite installation. I thought about this and decided my tractor might be cranky, but at least I can script the startup of my SOA Suite 11g installation.

    So with this in mind I created 6 scripts. I created them for Linux, but they should translate to Windows without too many problems. This is left as an exercise to the reader; note you will have to hardcode more than I did in the Linux scripts and create separate script files for the sqlplus and WLST sections.

    Order to start things

    I believe there should be order in all things, especially starting the SOA Suite. So here is my preferred order.

    Start Database. This is needed by EM and the rest of SOA Suite, so best to start it before the Admin Server and managed servers.
    Start Node Manager on all machines. This is needed if you want the scripts to work across machines.
    Start Admin Server. Once this is done, in theory you can manually start the managed servers using the WebLogic console. But then you have to wait for the console to be available. Scripting it all is a quicker and easier way of starting.
    Start Managed Servers & Clusters. Best to start them one per physical machine at a time to avoid undue load on the machines. A non-clustered install will have just soa_server1 and bam_server1 by default. Clusters will have at least SOA and BAM clusters that can be started as a group or individually. I have provided scripts for standalone servers, but it is easy to change them to work with clusters.

    Starting Database

    I have provided a very primitive script (available here) to start the database, the listener and the DB console. The section highlighted in red needs to match your database name.

    #!/bin/sh
    echo "##############################"
    echo "# Setting Oracle Environment #"
    echo "##############################"
    . oraenv <<-EOF
    orcl
    EOF

    echo "#####################"
    echo "# Starting Database #"
    echo "#####################"
    sqlplus / as sysdba <<-EOF
    startup
    exit
    EOF

    echo "#####################"
    echo "# Starting Listener #"
    echo "#####################"
    lsnrctl start

    echo "######################"
    echo "# Starting dbConsole #"
    echo "######################"
    emctl start dbconsole

    read -p "Hit <enter> to continue"

    Starting SOA Suite

    My script for starting the SOA Suite (available here) breaks the task down into five sections.

    Setting the Environment

    First set up the environment variables. The variables highlighted in red probably need changing for your environment.

    #!/bin/sh
    echo "###########################"
    echo "# Setting SOA Environment #"
    echo "###########################"
    export MW_HOME=~oracle/Middleware11gPS1
    export WL_HOME=$MW_HOME/wlserver_10.3
    export ORACLE_HOME=$MW_HOME/Oracle_SOA
    export DOMAIN_NAME=soa_std_domain
    export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME

    Starting the Node Manager

    I start node manager with a nohup to stop it exiting when the script terminates, and I redirect the standard output and standard error to a file in a logs directory.

    cd $DOMAIN_HOME
    echo "#########################"
    echo "# Starting Node Manager #"
    echo "#########################"
    nohup $WL_HOME/server/bin/startNodeManager.sh >logs/NodeManager.out 2>&1 &

    Starting the Admin Server

    I had problems starting the Admin Server from Node Manager, so I decided to start it using the command line script.
    I again use nohup and redirect output.

    echo "#########################"
    echo "# Starting Admin Server #"
    echo "#########################"
    nohup ./startWebLogic.sh >logs/AdminServer.out 2>&1 &

    Starting the Managed Servers

    I then used WLST (the WebLogic Scripting Tool) to start the managed servers. First I wait for the Admin Server to come up by putting a connect command in a loop. I could have put the WLST commands into a separate script file, but I wanted to reduce the number of files I was using and so used redirected input (here-document syntax).

    $ORACLE_HOME/common/bin/wlst.sh <<-EOF
    import time
    sleep=time.sleep
    print "#####################################"
    print "# Waiting for Admin Server to Start #"
    print "#####################################"
    while True:
      try:
        connect(adminServerName="AdminServer")
        break
      except:
        sleep(10)

    I then start the SOA server and tell WLST to wait until it is started before returning. If starting a cluster, the start command would be modified accordingly to start the SOA cluster.

    print "#######################"
    print "# Starting SOA Server #"
    print "#######################"
    start(name="soa_server1", block="true")

    I then start the BAM server in the same way as the SOA server.

    print "#######################"
    print "# Starting BAM Server #"
    print "#######################"
    start(name="bam_server1", block="true")
    EOF

    Finally I let people know the servers are up and wait for input, in case I am running in a separate window, in which case the result would be lost without the read command.

    echo "#####################"
    echo "# SOA Suite Started #"
    echo "#####################"
    read -p "Hit <enter> to continue"

    Stopping the SOA Suite

    My script for shutting down the SOA Suite (available here) is basically the reverse of my startup script. After setting the environment I connect to the Admin Server using WLST and shut down the managed servers and the admin server. Again, the script would need modifying for a cluster.

    Stopping the Servers

    If I cannot connect to the Admin Server, I try to connect to the node manager, in case the Admin Server is down but the managed servers are up.
    #!/bin/sh
    echo "###########################"
    echo "# Setting SOA Environment #"
    echo "###########################"
    export MW_HOME=~oracle/Middleware11gPS1
    export WL_HOME=$MW_HOME/wlserver_10.3
    export ORACLE_HOME=$MW_HOME/Oracle_SOA
    export DOMAIN_NAME=soa_std_domain
    export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME

    cd $DOMAIN_HOME
    $MW_HOME/Oracle_SOA/common/bin/wlst.sh <<-EOF
    try:
      print("#############################")
      print("# Connecting to AdminServer #")
      print("#############################")
      connect(username='weblogic',password='welcome1',url='t3://localhost:7001')
    except:
      print "#########################################"
      print "#   Unable to connect to Admin Server   #"
      print "# Attempting to connect to Node Manager #"
      print "#########################################"
      nmConnect(domainName=os.getenv("DOMAIN_NAME"))
    print "#######################"
    print "# Stopping BAM Server #"
    print "#######################"
    shutdown('bam_server1')
    print "#######################"
    print "# Stopping SOA Server #"
    print "#######################"
    shutdown('soa_server1')
    print "#########################"
    print "# Stopping Admin Server #"
    print "#########################"
    shutdown('AdminServer')
    disconnect()
    nmDisconnect()
    EOF

    Stopping the Node Manager

    I stopped the node manager by searching for the java node manager process using the ps command and then killing that process.

    echo "#########################"
    echo "# Stopping Node Manager #"
    echo "#########################"
    kill -9 `ps -ef | grep java | grep NodeManager |  awk '{print $2;}'`

    echo "#####################"
    echo "# SOA Suite Stopped #"
    echo "#####################"
    read -p "Hit <enter> to continue"

    Stopping the Database

    Again, my script for shutting down the database is the reverse of my start script. It is available here. The only change needed might be the database name.

    #!/bin/sh
    echo "##############################"
    echo "# Setting Oracle Environment #"
    echo "##############################"
    . oraenv <<-EOF
    orcl
    EOF

    echo "######################"
    echo "# Stopping dbConsole #"
    echo "######################"
    emctl stop dbconsole

    echo "#####################"
    echo "# Stopping Listener #"
    echo "#####################"
    lsnrctl stop

    echo "#####################"
    echo "# Stopping Database #"
    echo "#####################"
    sqlplus / as sysdba <<-EOF
    shutdown immediate
    exit
    EOF

    read -p "Hit <enter> to continue"

    Cleaning Up

    Cleaning SOA Suite

    I often run tests and want to clean up all the log files. The following script (available here) does this for the WebLogic servers in a given domain on a machine. After setting the domain, I just remove all files under the servers' logs directories. It also cleans up the log files I created with my startup scripts. These scripts could be enhanced to copy off the log files if you needed them, but in my test environments I don't need them and would prefer to reclaim the disk space.
    #!/bin/sh
    echo "###########################"
    echo "# Setting SOA Environment #"
    echo "###########################"
    export MW_HOME=~oracle/Middleware11gPS1
    export WL_HOME=$MW_HOME/wlserver_10.3
    export ORACLE_HOME=$MW_HOME/Oracle_SOA
    export DOMAIN_NAME=soa_std_domain
    export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME

    echo "##########################"
    echo "# Cleaning SOA Log Files #"
    echo "##########################"
    cd $DOMAIN_HOME
    rm -Rf logs/* servers/*/logs/*

    read -p "Hit <enter> to continue"

    Cleaning Database

    I also created a script to clean up the dump files of an Oracle database instance, and also the EM log files (available here). This relies on the machine name being correct, as the EM log files are stored in a directory that is based on the hostname and the Oracle SID.

    #!/bin/sh
    echo "##############################"
    echo "# Setting Oracle Environment #"
    echo "##############################"
    . oraenv <<-EOF
    orcl
    EOF

    echo "#############################"
    echo "# Cleaning Oracle Log Files #"
    echo "#############################"
    rm -Rf $ORACLE_BASE/admin/$ORACLE_SID/*dump/*
    rm -Rf $ORACLE_HOME/`hostname`_$ORACLE_SID/sysman/log/*

    read -p "Hit <enter> to continue"

    Summary

    Hope you find the above scripts useful. They certainly stop me hanging around waiting for things to happen on my test machine, and they make it easy to run a test, change parameters, bounce the SOA Suite and clean the logs between runs so I can see exactly what is happening. Now I need to get that mower started…


  • Auto blocking attacking IP address

    - by dong
    This is to share my PowerShell code online. I originally asked this question on the MSDN (or TechNet?) forums, here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77

    In short, it tries to find attacking IP addresses and then adds them to a Windows Firewall block rule. So I suppose:

    1, You are running a Windows Server 2008 facing the Internet.
    2, You have some ports open for services, e.g. TCP 21 for FTP and TCP 3389 for Remote Desktop. You can see in my code that I only deal with these two, since that's what I opened; you can add further port numbers if you like, but the way to process them might differ from these two.
    3, I strongly suggest you use a STRONG password and follow all security best practices. This ps1 code is NOT for adding security to your server, but to reduce the nuisance of brute force attacks and make the sysadmin's life easier: i.e. your FTP log won't hold megabytes of nonsense, and your Windows Security log won't roll over so far that it can only tell you what happened last month.
    4, You are comfortable with setting up Windows Firewall rules. In my code, my rule has the name "MY BLACKLIST"; you need to set up a similar one and set it to BLOCK everything.
    5, My rule is dangerous because it carries the risk of blocking myself out as well. I do have a backup plan, i.e. the DELL DRAC5, so that if that happens I can still remote console to my server and reset the firewall.
    6, By no means is the code perfect: the coding style, the use of PowerShell skills and the hard-coded parts can all be improved. It's just that it's already good enough for me; it has been running on my server for more than 7 MONTHS.
    7, The current code still has a problem that I haven't solved yet; more on this point after the code. :)

    #Dong Xie, March 2012
    #my simple code to monitor attack and deal with it
    #Windows Server 2008 Logon Type
    #8: NetworkCleartext, i.e. FTP
    #10: RemoteInteractive, i.e. RDP

    $tick = 0;
    "Start to run at: " + (get-date);

    $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";
    $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";

    while($True) {
      $blacklist = @();

      "Running... (tick:" + $tick + ")"; $tick+=1;

      #Port 3389
      $a = @()
      netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `
        $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }
      if ($a.count -gt 0) {
        $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `
          $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique
        foreach ($ip in $a) {
          if ($ips -contains $ip) {
            if (-not ($blacklist -contains $ip)) {
              $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;
              "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;
              if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}
            }
          }
        }
      }

      #FTP
      $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.
      #Get-EventLog has built-in switches for EventID, Message, Time, etc. but using any of these it will be VERY slow.
      $count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `
                $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;
      if ($count -gt 50) #threshold
      {
        $ips = @();
        $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `
          | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
          | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
          | where {$_.Count -ge 10} | select -ExpandProperty Name;
        $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `
          | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
          | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
          | where {$_.Count -ge 10} | select -ExpandProperty Name;
        $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;

        foreach ($ip in $ips) {
          if (-not ($blacklist -contains $ip)) {
            "Found attacking IP on FTP: " + $ip;
            $blacklist = $blacklist + $ip;
          }
        }
      }

      #Firewall change
      <# $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255","");
      #inside $current there is no \r or \n need remove.
      foreach ($ip in $blacklist) {
        if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {
          "Adding this IP into firewall blocklist: " + $ip;
          $c= 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current;
          Invoke-Expression $c;
        }
      } #>

      foreach ($ip in $blacklist) {
        $fw=New-object -comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx
        $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?
        if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))
          {"Adding this IP into firewall blocklist: " + $ip;
            $myrule.RemoteAddresses+=(","+$ip);
          }
      }

      Wait-Event -Timeout 30 #pause 30 secs

    } # end of top while loop.

    Further points:

    1, I suppose the server is listening on port 3389 on server IPs 192.168.100.101 and 192.168.100.102; you need to replace these with your real IPs.
    2, I suppose you Remote Desktop to this server from a workstation with IP 10.0.0.1. Please replace this as well.
    3, The threshold for a 3389 attack is 20; you don't want to block yourself just because you typed your password wrong 3 times. You can change this threshold by your own reasoning.
    4, The FTP check looks at the log for attacks only in the last 5 minutes; you can change that as well.
    5, I suppose the server is serving FTP on both IP addresses, and their log paths are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly.
    6, The FTP checking code only asks for the last 200 lines of the log, and the threshold is 10; change as you wish.
    7, The code runs in a loop; you can set the loop interval at the last line.
    To run this code, copy and paste it into your editor, finish all the editing, get it onto your server, open a CMD window, and type powershell.exe -file your_powershell_file_name.ps1. It will start running, and you can Ctrl-C to break it.

    This is what you see when it's running: [screenshot in the original post]
    This is when it detected an attack and added the firewall rule: [screenshot in the original post]

    Regarding the design of the code:

    1, There are many ways you can detect an attack, but adding an IP to a block rule is no small thing; you need to think hard before doing it. Reasons include: you don't want to block yourself, and you don't want to block your customers/users, i.e. the good guys.
    2, Thus, for each service/port, I double-check. For 3389, the IP first needs to show up in netstat.exe, then in the event log; for FTP, first the event log is checked, then the FTP log files.
    3, In three places I need to make sure I'm not adding myself into the block rule: -ne with a single IP, -like with a subnet.

    Now the final bit:

    1, The code will stop working after a while (depending on how heavily you are attacked, it could be weeks, months, or days?!). It will throw a red error message in CMD. Don't panic: it does no harm, but it is also no longer blocking new attacks. THE REASON (not confirmed with MS people): the COM object used to manage the firewall will only accept a list of IP addresses up to a length of around 32 KB, I think; once it reaches the limit, you get the error message.
    2, This is in fact my second solution using the COM object. The first solution is still in the comment block for your reference; it uses netsh, and it fails because, being run from CMD, you can only throw it a list of IPs up to 8 KB.
    3, I haven't worked out the workaround yet. Some ideas include: wrap that RemoteAddresses setting line with error checking, and once it reaches the limit, use the newly detected IPs as the whole list instead of appending to it (a sketch of this idea follows below). This basically resets your block rule to ground zero and loses the previous bad IPs. This does less harm than it sounds, because if, after a certain period has passed, any of these bad IPs still hasn't repented and continues attacking you, it only gets 30 seconds, or 20 guesses at your password, before you block it again. And there is the benefit that the bad IP may turn back to good hands again, so you are not blocking a potential customer, or your CEO's home PC, just because once upon a time it was a zombie. Thus the ZEN of blocking: never block any IP for too long.
    4, But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask them how to set an arbitrary-length list of IP addresses in a rule; at least from my experience on the forum, they don't know and they don't care, because they think dynamic blocking should be done by some expensive hardware. Or, from a programming perspective, you can create a new rule once the old one is full; then you'll have MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, ... etc. Once in a while you can compile them together and start a business selling your blacklist on the market!

    Enjoy the code! p.s. (PowerShell is REALLY REALLY GREAT!)
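    A possible shape for the reset idea in point 3 above (an added sketch, untested; whether the overflow surfaces as a catchable exception at this line is an assumption):

    try {
        # Normal path: append the new attacker to the existing rule.
        $myrule.RemoteAddresses += ("," + $ip);
    }
    catch {
        # Rule is full: restart it from just this round's attackers,
        # accepting the "ground zero" trade-off described above.
        "MY BLACKLIST full - resetting it to the newly detected IPs";
        $myrule.RemoteAddresses = ($blacklist -join ",");
    }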


  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analyses by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the LSQL below was rewritten into the physical SQL for an Oracle 11g database, shown after it.
Logical SQL:

SELECT "D0 Time"."T02 Per Name Month" saw_0,
       "D4 Product"."P01  Product" saw_1,
       "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
FROM "Sample Sales"
ORDER BY saw_0, saw_1

Physical SQL:

WITH SAWITH0 AS (
    select T986.Per_Name_Month as c1,
           T879.Prod_Dsc as c2,
           sum(T835.Units) as c3,
           T879.Prod_Key as c4
    from Product T879 /* A05 Product */,
         Time_Mth T986 /* A08 Time Mth */,
         FactsRev T835 /* A11 Revenue (Billed Time Join) */
    where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
    group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
)
select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
from SAWITH0
order by c1, c2

Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

The Structure of the Common Enterprise Information Model (CEIM)

The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

Business Model

Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.

The Generic Collections: System.Collections.Generic namespace

The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons because, depending on how you are using the collections, it's completely possible that synchronization may not be required or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn with their various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.

The Associative Collection Classes

Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order. The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals(), or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order. The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted.
The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order, thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key. The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare.

The Non-Associative Containers

The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored or some other means (such as index) to manipulate the collection. The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However, inserting and removing at the beginning or middle of the List<T> is very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed. The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense. The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle.
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction. The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>. Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T> on the other hand is a first-in-first-out (FIFO) container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines. So that's the basic collections. Let's summarize what we've learned in a quick reference table.

Collection       | Ordered? | Contiguous Storage? | Direct Access? | Lookup Efficiency        | Manipulate Efficiency | Notes
Dictionary       | No       | Yes                 | Via Key        | Key: O(1)                | O(1)                  | Best for high performance lookups.
SortedDictionary | Yes      | No                  | Via Key        | Key: O(log n)            | O(log n)              | Compromise of Dictionary speed and ordering; uses binary search tree.
SortedList       | Yes      | Yes                 | Via Key        | Key: O(log n)            | O(n)                  | Very similar to SortedDictionary, except tree is implemented in an array, so has faster lookup on preloaded data, but slower loads.
List             | No       | Yes                 | Via Index      | Index: O(1), Value: O(n) | O(n)                  | Best for smaller lists where direct access required and no ordering.
LinkedList       | No       | No                  | No             | Value: O(n)              | O(1)                  | Best for lists where inserting/deleting in middle is common and no direct access required.
HashSet          | No       | Yes                 | Via Key        | Key: O(1)                | O(1)                  | Unique unordered collection, like a Dictionary except key and value are same object.
SortedSet        | Yes      | No                  | Via Key        | Key: O(log n)            | O(log n)              | Unique ordered collection, like SortedDictionary except key and value are same object.
Stack            | No       | Yes                 | Only Top       | Top: O(1)                | O(1)                  | Essentially same as List<T> except only processed as LIFO.
Queue            | No       | Yes                 | Only Front     | Front: O(1)              | O(1)                  | Essentially same as List<T> except only processed as FIFO.

The Original Collections: System.Collections namespace

The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention the original collections and their generic equivalents:

ArrayList - A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
Hashtable - Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
Queue - First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
SortedList - Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
Stack - Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries only.

The Concurrent Collections: System.Collections.Concurrent namespace

The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is the summary of each with a link to a blog post I did on each of them.

ConcurrentQueue - Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentStack - Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentBag - Thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
ConcurrentDictionary - Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under same lock). For more information see C#/.NET Little Wonders: The ConcurrentDictionary
BlockingCollection - Wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read. Writers can block until space is available to write (if bounded). For more information see C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection

Summary

The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower, or even harder to maintain, if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections (a short illustrative sketch follows below). If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.

Tweet Technorati Tags: C#,.NET,Collections,Generic,Concurrent,Dictionary,List,Stack,Queue,SortedList,SortedDictionary,HashSet,SortedSet
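To make the table above concrete, here is a short, self-contained C# sketch of my own (not from the original post) showing the behavioral differences called out in this article - constant-time hash lookups, key-ordered iteration, set membership, and FIFO vs LIFO processing:

    using System;
    using System.Collections.Generic;

    class CollectionChoiceDemo
    {
        static void Main()
        {
            // Dictionary<TKey,TValue>: O(1) lookups, no ordering guarantee.
            var ordersById = new Dictionary<int, string>();
            ordersById.Add(42, "Widget order");
            ordersById.Add(7, "Gadget order");
            Console.WriteLine(ordersById[42]);               // Widget order

            // SortedDictionary<TKey,TValue>: O(log n), iterates in key order.
            var salesByDay = new SortedDictionary<string, decimal>();
            salesByDay.Add("2012-01-03", 10.5m);
            salesByDay.Add("2012-01-01", 99.0m);
            foreach (var pair in salesByDay)                 // keys come out sorted
                Console.WriteLine(pair.Key + ": " + pair.Value);

            // HashSet<T>: fast membership checks, duplicates silently ignored.
            var vendorCodes = new HashSet<string> { "ACME", "GLOBEX" };
            Console.WriteLine(vendorCodes.Contains("ACME")); // True

            // Queue<T> vs Stack<T>: FIFO vs LIFO over the same items.
            var queue = new Queue<int>(new[] { 1, 2, 3 });
            var stack = new Stack<int>(new[] { 1, 2, 3 });
            Console.WriteLine(queue.Dequeue());              // 1 (first in, first out)
            Console.WriteLine(stack.Pop());                  // 3 (last in, first out)
        }
    }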

    Read the article

  • Error when reloading supervisord: unix:///tmp/supervisor.sock no such file

    - by Yarin
I'm running supervisord on my CentOS 6 box like so, /usr/bin/supervisord -c /etc/supervisord.conf and when I launch supervisorctl all process statuses are fine, but if I try to reload using supervisorctl I get "unix:///tmp/supervisor.sock no such file". I'm using the same config file I've used successfully on other boxes, and I'm running everything as root. I can't understand what the problem is...

Config file:

; Sample supervisor config file. [unix_http_server] file=/tmp/supervisor.sock ; (the path to the socket file) ;chmod=0700 ; socket file mode (default 0700) ;chown=nobody:nogroup ; socket file uid:gid owner ;username=user ; (default is no username (open server)) ;password=123 ; (default is no password (open server)) ;[inet_http_server] ; inet (TCP) server disabled by default ;port=127.0.0.1:9001 ; (ip_address:port specifier, *:port for all iface) ;username=user ; (default is no username (open server)) ;password=123 ; (default is no password (open server)) [supervisord] logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log) logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB) logfile_backups=10 ; (num of main logfile rotation backups;default 10) loglevel=info ; (log level;default info; others: debug,warn,trace) pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid) nodaemon=false ; (start in foreground if true;default false) minfds=1024 ; (min. avail startup file descriptors;default 1024) minprocs=200 ; (min. avail process descriptors;default 200) ;umask=022 ; (process file creation umask;default 022) ;user=chrism ; (default is current user, required if root) ;identifier=supervisor ; (supervisord identifier, default is 'supervisor') ;directory=/tmp ; (default is not to cd during start) ;nocleanup=true ; (don't clean up tempfiles at start;default false) ;childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP) ;environment=KEY=value ; (key value pairs to add to environment) ;strip_ansi=false ; (strip ansi escape codes in logs; def. false) ; the below section must remain in the config file for RPC ; (supervisorctl/web interface) to work, additional interfaces may be ; added by defining them in separate rpcinterface: sections [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket ;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket ;username=chris ; should be same as http_username if set ;password=123 ; should be same as http_password if set ;prompt=mysupervisor ; cmd line prompt (default "supervisor") ;history_file=~/.sc_history ; use readline history if available ; The below sample program section shows all possible program subsection values, ; create one or more 'real' program: sections to be able to control them under ; supervisor.
;[program:foo] ;command=/bin/cat [program:embed_scheduler] command=/opt/web-apps/mywebsite/custom_process.py process_name=%(program_name)s_%(process_num)d numprocs=3 ;[program:theprogramname] ;command=/bin/cat ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=999 ; the relative start priority (default 999) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups=10 ; # of stderr logfile backups (default 10) ;stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions (def no adds) ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample eventlistener section shows all possible ; eventlistener subsection values, create one or more 'real' ; eventlistener: sections to be able to handle event notifications ; sent by supervisor. ;[eventlistener:theeventlistenername] ;command=/bin/eventlistener ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;events=EVENT ; event notif. types to subscribe to (req'd) ;buffer_size=10 ; event buffer queue size (default 10) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=-1 ; the relative start priority (default -1) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 
1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups ; # of stderr logfile backups (default 10) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample group section shows all possible group values, ; create one or more 'real' group: sections to create "heterogeneous" ; process groups. ;[group:thegroupname] ;programs=progname1,progname2 ; each refers to 'x' in [program:x] definitions ;priority=999 ; the relative start priority (default 999) ; The [include] section can just contain the "files" setting. This ; setting can list multiple files (separated by whitespace or ; newlines). It can also contain wildcards. The filenames are ; interpreted as relative to this file. Included files *cannot* ; include files themselves. ;[include] ;files = relative/directory/*.ini
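One thing worth double-checking (an assumption on my part, not a confirmed diagnosis): the socket lives under /tmp, which many systems clean out periodically, and supervisorctl can only find it if the [unix_http_server] file= path and the [supervisorctl] serverurl agree exactly. A sketch of a more durable variant:

    [unix_http_server]
    file=/var/run/supervisor.sock             ; outside /tmp, survives tmp cleanup

    [supervisorctl]
    serverurl=unix:///var/run/supervisor.sock ; must match the file= path above

After changing the paths, restart supervisord so it recreates the socket in the new location.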

    Read the article

  • Need help in setting lighttpd on Ubuntu 9.10

    - by hap497
    Hi, I am trying to run lighttpd on Ubuntu 9.10. I get the conf file from the doc directory of lighttpd source. $ sudo ./lighttpd -f lighttpd.conf $ ps -ef | grep lighttpd root 2094 1 0 19:40 ? 00:00:00 ./lighttpd -f lighttpd.conf This is my lighttpd.conf: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", # "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) #server.port = 81 ## bind to localhost (default: all interfaces) #server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.socket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1

When I go to the browser and hit 'http://127.0.0.1', I get 'link not found'. Any idea?
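A hedged sanity check (my assumption, not a confirmed diagnosis): with this config lighttpd serves files from server.document-root, so 'link not found' is what you would expect if /srv/www/htdocs/ is missing or has no index file. Something like the following would give it a page to serve:

    mkdir -p /srv/www/htdocs
    echo '<html>hello</html>' > /srv/www/htdocs/index.html
    # index-file.names already lists index.html, so http://127.0.0.1/ should now resolve;
    # if not, check server.errorlog at /var/log/lighttpd/error.log
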

    Read the article

  • ServerRoot in my lighttpd.conf

    - by michael
Hi, I have used the following example lighttpd.conf to launch my lighttpd. Can you please tell me where my 'ServerRoot' is? # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ...
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.socket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you.

    Read the article

  • MSMQ messages using HTTP just won't get delivered

    - by John Breakwell
    I'm starting off the blog with a discussion of an unusual problem that has hit a couple of my customers this month. It's not a problem you'd expect to bump into and the solution is potentially painful. Scenario You want to make use of the HTTP protocol to send MSMQ messages from one machine to another. You have installed HTTP support for MSMQ and have addressed your messages correctly but they will not leave the outgoing queue. There is no configuration for HTTP support - setup has already done all that for you (although you may want to check the most recent "Installation of the MSMQ HTTP Support Subcomponent" section of MSMQINST.LOG to see if anything DID go wrong) - so you can't tweak anything. Restarting services and servers makes no difference - the messages just will not get delivered. The problem is documented and resolved by Knowledgebase article 916699 "The message may not be delivered when you use the HTTP protocol to send a message to a server that is running Message Queuing 3.0". It is unlikely that you would be able to resolve the problem without the assistance of PSS because there are no messages that can be seen to assist you and only access to the source code exposes the root cause. As this communication is over HTTP, the IIS logs would be a good place to start. POST entries are logged which show that connectivity is working and message delivery is being attempted: #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2006-09-12 12:11:29 #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status 2006-09-12 12:12:12 W3SVC1 10.1.17.219 POST /msmq/private$/test - 80 - 10.2.200.3 - 200 0 0 If you capture the traffic with Network Monitor you can see the POST being sent to the server but you also see a response being returned to the client: HTTP: Response to Client; HTTP/1.1; Status Code = 500 - Internal Server Error "Internal Server Error" means we can probably stop looking at IIS and instead focus on the Message Queuing ISAPI extension (Mqise.dll). MSMQ 3.0 (Windows XP and Windows Server 2003) comes with error logging enabled by default but the log files are in binary format - MSMQ 2.0 generated logging in plain text. The symbolic information needed for formatting the files is not currently publicly available so log files have to be sent in to Microsoft PSS.  Although this does mean raising a support case, formatting the log files to text and returning them to the customer shouldn't take long. Obviously the engineer analyses them for you - I just want to point out that you can see the logging output in text format if you want it. The important entries in the log for this problem are: [7]b48.928 09/12/2006-13:20:44.552 [mqise GetNetBiosNameFromIPAddr] ERROR:Failed to get the NetBios name from the DNS name, error = 0xea [7]b48.928 09/12/2006-13:20:44.552 [mqise RPCToServer] ERROR:RPC call R_ProcessHTTPRequest failed, error code = 1702(RPC_S_INVALID_BINDING) which allow a Microsoft escalation engineer to check the MQISE source code to see what is going wrong. This problem according to the article occurs when the extension tries to bind to the local MSMQ service after the extension receives a POST request that contains an MSMQ message. 
MSMQ resolves the server name by using the DNS host name but the extension cannot bind to the service because the buffer that MSMQ uses to resolve the server name is too small - server names that are exactly 15 characters long will not fit. RPC exception 0x6a6 (RPC_S_INVALID_BINDING) occurs in the W3wp.exe process but the exception is handled and so you do not receive an error message. The workaround is to rename the MSMQ server to something less than 15 characters. If the problem has only just been noticed in a production environment - an application may have been modified to get through a newly-implemented firewall, for example - then renaming is going to be an issue. Other applications may need to be reinstalled or modified if server names are hard-coded or stored in the registry. The renaming may also break a company naming convention where the name is built up from something like location+department+number. If you want to learn more about MSMQ logging then check out Chapter 15 of the MSMQ FAQ. In fact, even if you DON'T want to learn anything about MSMQ logging you should read the FAQ anyway as there is a huge amount of useful information on known issues and the like.
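As a quick way to tell whether a box is exposed to this bug, a hypothetical one-liner of my own (not from the knowledgebase article) is to measure the computer name, since NetBIOS names max out at 15 characters and it is the exactly-15-character case that breaks the binding:

    # Per KB916699, names of exactly 15 characters will not fit the buffer
    if ($env:COMPUTERNAME.Length -eq 15) {
        Write-Host "Server name is exactly 15 characters - consider renaming (KB916699)"
    }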

    Read the article

  • Add Social Elements to Your Gmail Contacts with Rapportive

    - by Matthew Guay
Would you like to discover more about your contacts?  Xobni is a great tool for this in Outlook, and thanks to a small plugin for Gmail, you can get similar functionality right from your favorite webmail app.

Setup Rapportive on Your Gmail

Browse to the Rapportive site (link below), and click install to add it to your browser.  Rapportive currently only supports Firefox and Google Chrome.  In this test, we installed it on Google Chrome.  Notice that Chrome warns Rapportive may access your private data from Gmail, though Rapportive says that they only use this data securely on your computer or their servers. Next time you log into Gmail, open a message to see the new Rapportive sidebar.  Click Log in to get started. Choose whether you want to let Rapportive access your data. Finally, choose whether to stay logged into Rapportive or to log out when you log out of Gmail.

Using Rapportive

Now, when you open an email, you should see more information about your contact on the right side of the message where you usually see Google AdSense ads. You may see an avatar, short bio, and links to their social networks.  You can add notes about a contact also, which lets you use Rapportive as a CRM. You may see more information on some contacts.  Here we see a contact that shows recent Tweets and links to several social networks.

Take Rapportive Further

You can add more features to Rapportive with Raplets, which are small extensions that add more information or CRM functionality.  To add these, click the Rapportive button on the top of Gmail, and select Add Raplets to Rapportive. Find a Raplet you want, and click Add This. A popup will open to give you more information about the Raplet; click the Add button at the bottom if you still want it. And, if you wish to close Rapportive without logging out of Gmail, click the Rapportive link in Gmail and select Log out.

Conclusion

Whether you want to find out more about your contacts or keep track of notes about them, Rapportive is a great way to do this from Gmail.  With tools like this, Gmail gets a bit more powerful and feels more like a desktop application. If you would like this type of functionality in Outlook, check out our article on how to power up Outlook's search and contacts with Xobni.

Add Rapportive to Gmail

    Read the article

  • BIP BIServer Query Debug

    - by Tim Dexter
    With some help from Bryan, I have uncovered a way to debug, or at least log, what BIServer is doing when BIP sends it a query request. This is not for those of you querying the database directly, but for those using the BIServer and its data model to fetch data for a BIP report. If you have written or used the query builder against BIServer, and when you run the report it chokes with a cryptic message that you have no clue about, read on. When BIP runs a piece of BIServer logical SQL to fetch data, it does not appear to validate it; it just passes it through. So what is BIServer doing on its end? As you may know, you are not writing regular physical SQL, it's actually logical SQL, e.g.

    select Jobs."Job Title" as "Job Title",
    Employees."Last Name" as "Last Name",
    Employees.Salary as Salary,
    Locations."Department Name" as "Department Name",
    Locations."Country Name" as "Country Name",
    Locations."Region Name" as "Region Name"
    from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs

    The tables might not even be physical tables; we don't care, that's what the BIServer and its model are for. You have put all the effort into building the model, just go get me the data from wherever it might be. The BIServer takes the logical SQL and uses its vast brain to work out what the physical SQL is, executes it and passes the result back to BIP.

    select distinct T32556.JOB_TITLE as c1,
    T32543.LAST_NAME as c2,
    T32543.SALARY as c3,
    T32537.DEPARTMENT_NAME as c4,
    T32532.COUNTRY_NAME as c5,
    T32577.REGION_NAME as c6
    from JOBS T32556, REGIONS T32577, COUNTRIES T32532, LOCATIONS T32569, DEPARTMENTS T32537, EMPLOYEES T32543
    where ( T32532.COUNTRY_ID = T32569.COUNTRY_ID
    and T32532.REGION_ID = T32577.REGION_ID
    and T32537.DEPARTMENT_ID = T32543.DEPARTMENT_ID
    and T32537.LOCATION_ID = T32569.LOCATION_ID
    and T32543.JOB_ID = T32556.JOB_ID )

    Not a very tough example, I know, but you get the idea. How do I know what the BIServer is up to? How can I find out what the issue might be if BIServer chokes on my query? There are a couple of steps:

    1. In the Administrator tool, set the logging level for the Administrator user to something greater than the default '0'; '7' is going to give you the max. Just remember to take it back down after you have finished the debug. I needed to bounce my BIServer service.
    2. Now here's the secret sauce. Prefix the following to your BIP query, setting the log level to match what you have in the admin tool: set variable LOGLEVEL = 7;

    Now run your BIP report. With the prefix in place, BIServer will write to the NQQuery.log file, located in the ./OracleBI/server/Log directory. In there you are going to find the complete process the BIServer has gone through to try and get the data back for you (a prefixed query is sketched below). A quick note: if the BIServer can, it's going to hit that great BIEE cache to get your data and you may not see the full log. If this is the case, get into the Administration page (via the browser login), clear out your BIP report cursor and then re-run. This will hopefully help out if you are trying to debug that annoying BIP report that will not run or is getting some strange data. Don't forget to turn that logging level back down once you are done. This will avoid the DBA screaming at you for sucking up all the disk space on the system.
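    For illustration, here is a trimmed-down version of the logical SQL above with the debug prefix in place - a sketch assembled from the article's own snippets, not a separate example from the post:

    set variable LOGLEVEL = 7;
    select Employees."Last Name" as "Last Name", Employees.Salary as Salary
    from HR.Employees Employees

    Run that as the BIP query and the full trace, logical and physical, lands in NQQuery.log.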

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
    This is an extension to Part 3 in the IPASBR series; see also:

    Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service
    Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer
    Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    In Part 3 I said "there isn't actually a .NET requirement here", and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5. The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; the request gets relayed to the on-premise service. In Node.js the authentication step looks like this:

    var options = {
        host: acs.namespace() + '-sb.accesscontrol.windows.net',
        path: '/WRAPv0.9/',
        method: 'POST'
    };
    var values = {
        wrap_name: acs.issuerName(),
        wrap_password: acs.issuerSecret(),
        wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/'
    };
    var req = https.request(options, function (res) {
        console.log("statusCode: ", res.statusCode);
        console.log("headers: ", res.headers);
        res.on('data', function (d) {
            var token = qs.parse(d.toString('utf8'));
            callback(token.wrap_access_token);
        });
    });
    req.write(qs.stringify(values));
    req.end();

    Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call:

    token = 'WRAP access_token=\"' + swt + '\"';
    //...
    var reqHeaders = {
        Authorization: token
    };
    var options = {
        host: acs.namespace() + '.servicebus.windows.net',
        path: '/rest/reverse?string=' + requestUrl.query.string,
        headers: reqHeaders
    };
    var req = https.request(options, function (res) {
        console.log("statusCode: ", res.statusCode);
        console.log("headers: ", res.headers);
        response.writeHead(res.statusCode, res.headers);
        res.on('data', function (d) {
            var reversed = d.toString('utf8');
            console.log('svc returned: ' + d.toString('utf8'));
            response.end(reversed);
        });
    });
    req.end();

    Running the sample: the usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and "Run Custom Tool" on the .tt files. Build, and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response, but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay, which it forwards.
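    For a quick command-line check of the relay (my addition, not from the post; assumes curl is available), the same URL the post gives can be called directly:

    curl "http://localhost:8013/reverse?string=abc123"

    This should print the same reversed string the browser shows, confirming the Node.js relay and the Service Bus hop end to end.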

    Read the article

  • 5.1 surround sound on Acer Aspire 5738ZG with Ubuntu 11.10

    - by kbargais_LV
    I got a problem with sound. I tried everything but got no results. :( I have 3 sound ports. My daemon.conf:

    # This file is part of PulseAudio.
    #
    # PulseAudio is free software; you can redistribute it and/or modify
    # it under the terms of the GNU Lesser General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # PulseAudio is distributed in the hope that it will be useful, but
    # WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
    # General Public License for more details.
    #
    # You should have received a copy of the GNU Lesser General Public License
    # along with PulseAudio; if not, write to the Free Software
    # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
    # USA.

    ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for
    ## more information. Default values are commented out. Use either ; or # for
    ## commenting.

    ; daemonize = no
    ; fail = yes
    ; allow-module-loading = yes
    ; allow-exit = yes
    ; use-pid-file = yes
    ; system-instance = no
    ; local-server-type = user
    ; enable-shm = yes
    ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
    ; lock-memory = no
    ; cpu-limit = no
    ; high-priority = yes
    ; nice-level = -11
    ; realtime-scheduling = yes
    ; realtime-priority = 5
    ; exit-idle-time = 20
    ; scache-idle-time = 20
    ; dl-search-path = (depends on architecture)
    ; load-default-script-file = yes
    ; default-script-file = /etc/pulse/default.pa
    ; log-target = auto
    ; log-level = notice
    ; log-meta = no
    ; log-time = no
    ; log-backtrace = 0
    resample-method = speex-float-1
    ; enable-remixing = yes
    ; enable-lfe-remixing = no
    flat-volumes = no
    ; rlimit-fsize = -1
    ; rlimit-data = -1
    ; rlimit-stack = -1
    ; rlimit-core = -1
    ; rlimit-as = -1
    ; rlimit-rss = -1
    ; rlimit-nproc = -1
    ; rlimit-nofile = 256
    ; rlimit-memlock = -1
    ; rlimit-locks = -1
    ; rlimit-sigpending = -1
    ; rlimit-msgqueue = -1
    ; rlimit-nice = 31
    ; rlimit-rtprio = 9
    ; rlimit-rttime = 1000000
    ; default-sample-format = s16le
    ; default-sample-rate = 44100
    ; default-sample-channels = 6
    ; default-channel-map = front-left,front-right
    default-fragments = 8
    default-fragment-size-msec = 10
    ; enable-deferred-volume = yes
    ; deferred-volume-safety-margin-usec = 8000
    ; deferred-volume-extra-delay-usec = 0
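    One detail worth noting in the dump above: the two settings that control surround output are still commented out, so PulseAudio falls back to its stereo defaults. A hypothetical daemon.conf tweak for 5.1 - an assumption on my part, not something from the question, and it only helps if the hardware really exposes six channels - would be:

    default-sample-channels = 6
    default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe

    (Channel names follow pulse-daemon.conf(5); restart PulseAudio with "pulseaudio -k" afterwards so the change takes effect.)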

    Read the article

  • Move a sphere along the swipe?

    - by gameOne
    I am trying to get a sphere to curl based on the swipe. I know this has been asked many times, but it's still yearning to be answered. I have managed to add force in the direction of the swipe and it works near perfectly. I also have all the swipe positions stored in a list. Now I would like to know how the curl can be achieved. I believe the curve in the swipe can be calculated with the vector dot product: if theta is 0, there is no need to add the curl; if it is not, add the curl. Maybe this condition is redundant once I find out how to curl the sphere along the swipe positions. The code that adds force to the sphere based on the swipe direction is below (a hypothetical curl sketch follows it):

    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;

    public class SwipeControl : MonoBehaviour
    {
        // First establish some variables
        private Vector3 fp;            // First finger position
        private Vector3 lp;            // Last finger position
        private Vector3 ip;            // Some intermediate finger position
        private float dragDistance;    // Distance needed for a swipe to register
        public float power;
        private Vector3 footballPos;
        private bool canShoot = true;
        private bool isKicked = false; // assumed declaration - referenced below but missing from the posted snippet
        private float factor = 40f;
        private List<Vector3> touchPositions = new List<Vector3>();

        void Start()
        {
            dragDistance = Screen.height * 20 / 100;
            Physics.gravity = new Vector3(0, -20, 0);
            footballPos = transform.position;
        }

        // Update is called once per frame
        void Update()
        {
            // Examine the touch inputs
            foreach (Touch touch in Input.touches)
            {
                /*if (touch.phase == TouchPhase.Began)
                {
                    fp = touch.position;
                    lp = touch.position;
                }*/
                if (touch.phase == TouchPhase.Moved)
                {
                    touchPositions.Add(touch.position);
                }
                if (touch.phase == TouchPhase.Ended)
                {
                    fp = touchPositions[0];
                    lp = touchPositions[touchPositions.Count - 1];
                    ip = touchPositions[touchPositions.Count / 2];
                    // First check if it's actually a drag
                    if (Mathf.Abs(lp.x - fp.x) > dragDistance || Mathf.Abs(lp.y - fp.y) > dragDistance)
                    {
                        // It's a drag - now check which axis the drag was on
                        if (Mathf.Abs(lp.x - fp.x) > Mathf.Abs(lp.y - fp.y))
                        {
                            // The horizontal movement is greater than the vertical movement
                            if ((lp.x > fp.x) && canShoot) // If the movement was to the right
                            {
                                // Right move
                                float x = (lp.x - fp.x) / Screen.height * factor;
                                rigidbody.AddForce((new Vector3(x, 10, 16)) * power);
                                Debug.Log("right " + (lp.x - fp.x)); // MOVE RIGHT CODE HERE
                                canShoot = false;
                                //rigidbody.AddForce((new Vector3((lp.x-fp.x)/30,10,16))*power);
                                StartCoroutine(ReturnBall());
                            }
                            else
                            {
                                // Left move
                                float x = (lp.x - fp.x) / Screen.height * factor;
                                rigidbody.AddForce((new Vector3(x, 10, 16)) * power);
                                Debug.Log("left " + (lp.x - fp.x)); // MOVE LEFT CODE HERE
                                canShoot = false;
                                //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power);
                                StartCoroutine(ReturnBall());
                            }
                        }
                        else
                        {
                            // The vertical movement is greater than the horizontal movement
                            if (lp.y > fp.y) // If the movement was up
                            {
                                // Up move
                                float y = (lp.y - fp.y) / Screen.height * factor;
                                float x = (lp.x - fp.x) / Screen.height * factor;
                                rigidbody.AddForce((new Vector3(x, y, 16)) * power);
                                Debug.Log("up " + (lp.x - fp.x)); // MOVE UP CODE HERE
                                canShoot = false;
                                //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power);
                                StartCoroutine(ReturnBall());
                            }
                            else
                            {
                                // Down move
                                Debug.Log("down " + lp + " " + fp); // MOVE DOWN CODE HERE
                            }
                        }
                    }
                    else
                    {
                        // It's a tap
                        Debug.Log("none"); // TAP CODE HERE
                    }
                    touchPositions.Clear(); // assumed fix: reset for the next swipe (missing from the posted snippet)
                }
            }
        }

        IEnumerator ReturnBall()
        {
            yield return new WaitForSeconds(5.0f);
            rigidbody.velocity = Vector3.zero;
            rigidbody.angularVelocity = Vector3.zero;
            transform.position = footballPos;
            canShoot = true;
            isKicked = false;
        }
    }
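    As a possible starting point for the curl itself - a hypothetical sketch, not part of the question - the sideways drift can be approximated as a Magnus-style force applied every physics step. The CurlBall name and the curlFactor value are assumptions; the legacy rigidbody shortcut matches the code above:

    using UnityEngine;

    public class CurlBall : MonoBehaviour
    {
        public float curlFactor = 0.5f; // assumed tuning value - adjust to taste

        void FixedUpdate()
        {
            // Magnus-style approximation: the force is perpendicular to both the
            // spin axis and the direction of travel, so a spinning ball drifts sideways.
            Vector3 magnus = Vector3.Cross(rigidbody.angularVelocity, rigidbody.velocity);
            rigidbody.AddForce(magnus * curlFactor);
        }
    }

    For this to do anything, the kick must impart spin, for example by calling rigidbody.AddTorque(Vector3.up * (lp.x - fp.x)) next to the AddForce calls, so the strength of the curl follows the horizontal sweep of the swipe.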

    Read the article
