Search Results


  • OBJECT_NAME parameters and dbid

    - by steveh99999
    If you've been using SQL Server for a long time, you are probably used to the OBJECT_NAME system function - especially useful for converting table IDs into table names when querying sysobjects and sysindexes. However, if you're an old-school DBA, did you know that since SQL Server 2005 Service Pack 2 it accepts a second parameter, database_id? For example, this can be used to summarize some useful information from sys.dm_exec_query_stats. When reviewing SQL Server performance, it can be useful to look at the most heavily used stored procedures rather than inefficient but less frequently used procedures. Here's a query to summarize performance data on the most heavily used stored procedures across all databases on a server:

        SELECT TOP 20
            DENSE_RANK() OVER (ORDER BY SUM(execution_count) DESC) AS rank,
            OBJECT_NAME(qt.objectid, qt.dbid) AS 'proc name',
            (CASE WHEN qt.dbid = 32767 THEN 'mssqlresource' ELSE DB_NAME(qt.dbid) END) AS 'Database',
            OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) AS 'schema',
            SUM(execution_count) AS 'TotalExecutions',
            SUM(total_worker_time) AS 'TotalCPUTimeMS',
            SUM(total_elapsed_time) AS 'TotalRunTimeMS',
            SUM(total_logical_reads) AS 'TotalLogicalReads',
            SUM(total_logical_writes) AS 'TotalLogicalWrites',
            MIN(creation_time) AS 'earliestPlan',
            MAX(last_execution_time) AS 'lastExecutionTime'
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
        WHERE OBJECT_NAME(qt.objectid, qt.dbid) IS NOT NULL
        GROUP BY OBJECT_NAME(qt.objectid, qt.dbid), qt.dbid, OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid)

    Read the article

  • Everybody's Heard About the Bird: OTN ArchBeat Top Tweets for June 2013

    - by Bob Rhubart
    Your clicks count! Here are the Top 10 most popular tweets for June 2013 from @OTNArchBeat on Twitter.

    1. Oracle #SOA Suite 11g Developers Cookbook Published | Antony Reynolds - Jun 28, 2013 at 12:25 PM
    2. Notes on Oracle #BPM PS6 Adaptive Case Management | Graeme Colman - Jun 24, 2013 at 11:55 AM
    3. Calling #ADF BC Web Service from #BPM Process | @AndrejusB - Jun 24, 2013 at 12:12 PM
    4. ZDNet's @JoeMcKendrick interviews #SOA guru and author Thomas Erl (@soaschool). - Jun 25, 2013 at 08:33 AM
    5. Two weeks and counting: OTN Architect Day: Cloud Computing - July 9 - Redwood Shores, CA. Registration is free. - Jun 25, 2013 at 06:00 PM
    6. Changing #WebLogic Server Deployment Order using #MBeans | @ArtofBI - Jun 24, 2013 at 12:07 PM
    7. Getting Started with #WebCenter Portal — Content Contribution Project — Part 2 | Husain Dalal #fusionmiddleware - Jun 24, 2013 at 09:58 AM
    8. Your next boss may not be the CIO, or any other IT manager for that matter | ZDNet - Jun 25, 2013 at 02:00 PM
    9. Single Sign-On with Security Assertion Markup Language between Oracle and SAP | Ronaldo Fernandes - Jun 26, 2013 at 04:08 PM
    10. RT @oracletechnet: It's Not TV, It's OTN: Top 10 Videos on the OTN YouTube Channel - Jun 27, 2013 at 09:06 AM

    Thought for the Day: "At some point you have to decide whether you're going to be a politician or an engineer. You cannot be both. To be a politician is to champion perception over reality. To be an engineer is to make perception subservient to reality. They are opposites. You can't do both simultaneously." - H. W. Kenton. Source: softwarequotes.com

    Read the article

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently I've been trying to make sense of implementing a chunked level of detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique - I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.

    1. I can't understand at which point down the chunked LOD pipeline the mesh gets split into chunks. Is this during the initial mesh generation, or is there a separate algorithm which does this?
    2. I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level?
    3a. How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case, would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this? (A raycast, maybe?)
    3b. Do the chunks calculate the camera distance themselves?
    4. Does each chunk have the same "resolution"? For example, if the top-level mesh is 32x32, will each subdivided node also be 32x32?
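
    For questions 3a and 3b, one common approach - sketched below as a C# fragment for Unity, with every name hypothetical rather than taken from any particular implementation - is to store a plain axis-aligned bounding box per chunk and pick the LOD by measuring the camera position against that box during a single traversal from the root. No colliders or raycasts are needed:

        using System.Collections.Generic;
        using UnityEngine;

        // Hypothetical quadtree node: each chunk stores its own AABB and mesh,
        // and recurses into its children when the camera is close enough.
        public class ChunkNode
        {
            public Bounds Box;            // axis-aligned bounding box of this chunk
            public ChunkNode[] Children;  // four children, or null at maximum depth
            public Mesh Mesh;             // e.g. a 32x32 patch at every level

            // Split when the camera is within splitFactor times the chunk size;
            // otherwise this chunk itself is rendered at its own detail level.
            public void Select(Vector3 cameraPos, float splitFactor, List<ChunkNode> visible)
            {
                float sqrDist = Box.SqrDistance(cameraPos); // squared distance to the box
                float splitDist = Box.size.x * splitFactor;
                if (Children != null && sqrDist < splitDist * splitDist)
                {
                    foreach (ChunkNode child in Children)
                        child.Select(cameraPos, splitFactor, visible);
                }
                else
                {
                    visible.Add(this);
                }
            }
        }

    In this shape the chunks never watch the camera themselves (3b): one traversal per frame passes the camera position down, and each node only compares it against its own bounding box.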

    Read the article

  • Wacom Bamboo CTH460L issues in Ubuntu 10.04

    - by Robert Smith
    I recently bought a Wacom Bamboo Pen & Touch CTH460L. I installed doctormo's PPA; however, the pen functionality didn't work and the touch was very glitchy (when I touched it, it immediately double-clicked and began to drag elements on the screen). I tried to configure it using the wacom-utility package in the Synaptic Package Manager (version 1.21-1), but that didn't work either. Then I followed this post (#621, written by aaaalex), and after some problems trying to restart Ubuntu (graphics-related problems), the pen works fine (it could be better, though) but the touch functionality doesn't work anymore. Currently I have installed xserver-xorg-input-wacom (1:0.10.11-0ubuntu7), wacom-dkms (0.8.10.2-1ubuntu1) and wacom-utility. The Wacom Utility only displays an "options" field under "Wacom BambooPT 2FG 4X5" but no other option to configure it. What is the correct way to get this tablet working on Ubuntu 10.04? By the way, currently I can't start Ubuntu properly when the tablet is connected (in that case, Ubuntu starts in low-graphics mode); I need to connect it later.

    Read the article

  • How to reset display settings in XFCE \ Ubuntu 12.04 and also fglrx drivers

    - by Agent24
    I recently upgraded to Ubuntu 12.04, and since I hate Unity I installed the Xubuntu package and am using XFCE instead. Since I have a Radeon HD5770 I also installed the fglrx drivers. This all went fine (aside from the fact that the post-release update fglrx drivers have an error on installation, so Ubuntu thinks they're not installed when they actually are). I configured my display settings (dual monitors, a 17" CRT on VGA and a 17" LCD on DVI) in the amdcccle program and everything was perfect. THEN, 2 days ago, I accidentally clicked on the "Display" settings in the XFCE settings manager. After that, everything got screwed. Normally I run the CRT at 1152x864 and the LCD at 1280x1024, with the CRT as my primary monitor (with panel) and the LCD without panels etc., just to display other windows when I want to drag them over there. The problem is that now, if I set my CRT to 1152x864, it stays at 1280x1024 virtually and half the stuff falls off the screen. It also puts the LCD at 1280x1024 BUT then overlays the CRT's display on top, with different wallpaper, in an L shape down the right-hand and bottom edges. In short, nothing makes sense and everything is FUBAR. I tried uninstalling fglrx through Synaptic, and renaming xorg.conf and also the XFCE XML file that has monitor settings, but it still won't make sense. Unity, on the other hand, can currently set everything normally, so the problem appears to be only with XFCE. In any case, I can't even get the fglrx drivers back: when I re-installed them, I couldn't run amdcccle anymore as it says the driver isn't installed!! Can someone help me reset my XFCE settings so the monitors aren't screwed with some incorrect virtual desktop size, and also so I can get the fglrx drivers back and working? I really don't want to have to format and reinstall and go through all the hassle, but it looks like I may have to :(

    Read the article

  • What are the consequences of immutable classes with references to mutable classes?

    - by glenviewjeff
    I've recently begun adopting the best practice of designing my classes to be immutable, per Effective Java [Bloch2008]. I have a series of interrelated questions about degrees of mutability and their consequences. I have run into situations where a (Java) class I implemented is only "internally immutable" because it uses references to other, mutable classes. In this case, the class under development appears from the external environment to have state.

    1. Do any of the benefits (see below) of immutable classes hold true even for such "internally immutable" classes?
    2. Is there an accepted term for the aforementioned "internal immutability"? Wikipedia's immutable object page uses the unsourced term "deep immutability" to describe an object whose references are also immutable.
    3. Is the distinction between mutability and side effects/state important?

    Josh Bloch lists the following benefits of immutable classes. They:

    - are simple to construct, test, and use
    - are automatically thread-safe and have no synchronization issues
    - do not need a copy constructor
    - do not need an implementation of clone
    - allow hashCode to use lazy initialization, and to cache its return value
    - do not need to be copied defensively when used as a field
    - make good Map keys and Set elements (these objects must not change state while in the collection)
    - have their class invariant established once upon construction, and it never needs to be checked again
    - always have "failure atomicity" (a term used by Joshua Bloch): if an immutable object throws an exception, it's never left in an undesirable or indeterminate state
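
    The question is about Java, but here is a minimal C# sketch of the distinction (all type names hypothetical): the class below is "internally immutable" - none of its fields can ever be reassigned - yet it observably changes state, because it shares a reference to a mutable list with its caller.

        using System.Collections.Generic;

        // Hypothetical example: every field is readonly, so the class looks
        // immutable, but it aliases a mutable collection owned by the caller.
        public sealed class Roster
        {
            private readonly List<string> names; // the reference is frozen; the list is not

            public Roster(List<string> names)
            {
                this.names = names; // no defensive copy: the caller keeps a live handle
            }

            public int Count
            {
                get { return names.Count; } // can change after construction
            }

            // A deeply immutable version would copy in the constructor:
            //     this.names = new List<string>(names);
            // and never hand the inner list back out.
        }

    Most of the benefits above - thread safety, safe use as a Map key, no defensive copies - rest on deep immutability, so they are exactly the ones worth re-checking for a class like this.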

    Read the article

  • Roadmap for Thinktecture IdentityServer

    - by Your DisplayName here!
    I got asked today if I could publish a roadmap for thinktecture IdentityServer (idsrv in short). Well - I got a lot of feedback after B1, and one of the biggest points here was the data access layer. So I made two changes:

    - I moved the configuration database access code to EF 4.1 code first. That makes it much easier to change the underlying database, so it is now just a matter of changing the connection string to use real SQL Server instead of SQL Compact - important when you plan to scale out.
    - I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks btw!) - so I made them optional. If you want to use them, go to web.config and uncomment the new providers.

    Then there are some other small changes:

    - The relying party registration entries now have added fields for extra data that you want to couple with the RP. One use case could be to give the UI a hint of how the login experience should look per RP; this makes it possible to have a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string.
    - WS-Federation single sign-out now conforms to the spec.
    - I made certificate-based endpoint identities for SSL endpoints optional. This caused some problems with configuration and versioning of existing clients.

    I hope I can release the RC in the next days. If there are no major issues, there will be RTM very soon!
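
    As an illustration of why code first makes the store swappable - a hedged sketch only, with hypothetical class and connection-string names rather than idsrv's actual schema - the same EF 4.1 context runs against SQL Compact or full SQL Server depending solely on which connection string web.config points it at:

        using System.Data.Entity;

        // Hypothetical EF 4.1 code-first context: the model lives in code, so
        // the named connection string alone decides the underlying database.
        public class ConfigurationContext : DbContext
        {
            public ConfigurationContext()
                : base("name=IdentityServerConfiguration")
            {
            }

            public DbSet<RelyingParty> RelyingParties { get; set; }
        }

        // Hypothetical entity with a free-form extra-data field per relying party.
        public class RelyingParty
        {
            public int Id { get; set; }
            public string Realm { get; set; }
            public string ExtraData { get; set; } // e.g. a UI hint for per-RP look and feel
        }

    Switching from SQL Compact to SQL Server is then a web.config edit - point the "IdentityServerConfiguration" connection string at the new server - with no code change.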

    Read the article

  • Hiding the Flash Message After a Time Delay

    - by Madhan ayyasamy
    Hi Friends, The flash hash is a great way to provide feedback to your users. Here is a quick tip for hiding the flash message after a period of time if you don't want to leave it lingering around. First, add this line to the head of your layout to ensure the Prototype and script.aculo.us JavaScript libraries are loaded:

        <%= javascript_include_tag :defaults %>

    Next, add the following to either your layout (recommended), your view templates or a partial, depending on your needs. I usually add this to a partial and include the partial in my layouts.

        <% flash.each do |flash_type, message| %>
          <%= content_tag :div, message, :class => "flash", :id => flash_type %>
          <% content_tag :script, :type => "text/javascript" do %>
            setTimeout("new Effect.Fade('<%= flash_type %>');", 10000);
          <% end %>
        <% end %>

    This will wrap the flash message in a div with class 'flash' and id 'error', 'notice' or 'warn', depending on the flash key specified. The value '10000' is the time in milliseconds before the flash will disappear - in this case, 10 seconds. This looks pretty good, and little JavaScript stunts like this can help make your site feel more professional. It's also worth bearing in mind, though, that not everybody can see well or read as quickly as others, so this may not be suitable for every application.

    Update: As Mitchell has pointed out (see comments below), it may be better to set the flash_type as the div class rather than its id. If there is the possibility that you'll be showing more than one flash message per page, setting the flash_type as the div id will result in your HTML/XHTML code becoming invalid, because the unique identifier will be used more than once per page. Here is a slightly more complex version of the method shown above that will hide all divs with class 'flash' after a time delay, achieving the same effect and also ensuring your code stays valid with more than one flash message:

        <% flash.each do |flash_type, message| %>
          <%= content_tag :div, message, :class => "flash #{flash_type}" %>
        <% end %>
        <% content_tag :script, :type => "text/javascript" do %>
          setTimeout("$$('div.flash').each(function(flash){ flash.hide(); })", 10000);
        <% end %>

    In this example, the div id is not set at all. Instead, each flash div will have class "flash" and also a class for the type of flash message ("error", "warning" etc.). Have a Great Day..:)

    Read the article

  • Why does TDD work?

    - by CesarGon
    Test-driven development (TDD) is big these days. I often see it recommended as a solution for a wide range of problems here on Programmers SE and in other venues. I wonder why it works. From an engineering point of view, it puzzles me for two reasons:

    1. The "write test + refactor till pass" approach looks incredibly anti-engineering. If civil engineers used that approach for bridge construction, or car designers for their cars, for example, they would be reshaping their bridges or cars at very high cost, and the result would be a patched-up mess with no well-thought-out architecture. The "refactor till pass" guideline is often taken as a mandate to forget architectural design and do whatever is necessary to comply with the test; in other words, the test, rather than the user, sets the requirement. In this situation, how can we guarantee good "ilities" in the outcomes, i.e. a final result that is not only correct but also extensible, robust, easy to use, reliable, safe, secure, etc.? This is what architecture usually does.
    2. Testing cannot guarantee that a system works; it can only show that it doesn't. In other words, testing may show you that a system contains defects if it fails a test, but a system that passes all tests is not safer than a system that fails them. Test coverage, test quality and other factors are crucial here. The false sense of safety that an "all green" outcome produces has been reported in the civil and aerospace industries as extremely dangerous, because it may be interpreted as "the system is fine" when it really means "the system is as good as our testing strategy". Often, the testing strategy is not checked. Or: who tests the tests?

    I would like to see answers containing reasons why TDD in software engineering is a good practice, and why the issues that I have explained above are not relevant (or not relevant enough) in the case of software. Thank you.

    Read the article

  • Executing Components in an Entity Component System

    - by John
    OK, so I am just starting to grasp the whole ECS paradigm right now and I need clarification on a few things. For the record, I am trying to develop a game using C++ and OpenGL, and I'm relatively new to game programming. First of all, let's say I have an Entity class which may have several components such as a MeshRenderer, Collider etc. From what I have read, I understand that each "system" carries out a specific task, such as calculating physics or rendering, and may use more than one component if needed. So for example, I would have a MeshRendererSystem act on all entities with a MeshRenderer component. Looking at Unity, I see that each GameObject has, by default, components such as a renderer, camera, collider and rigidbody. From what I understand, an entity should start out as an empty "container" and should be filled with components to create a certain type of game object. So what I don't understand is how the "system" works in an entity component system. http://docs.unity3d.com/ScriptReference/GameObject.html So I have a GameObject (the Entity) class like:

        class GameObject
        {
        public:
            GameObject(std::string objectName);
            ~GameObject(void);

            Component AddComponent(std::string name);
            Component AddComponent(Component componentType);
        };

    So if I had a GameObject to model a warship and I wanted to add a MeshRenderer component, I would do the following:

        warship->AddComponent(new MeshRenderer());

    In the MeshRenderer's constructor, should I call on the MeshRendererSystem and "subscribe" the warship object to this system? In that case, the MeshRendererSystem should probably be a Singleton ("shudder"). From looking at Unity's GameObject, if each object potentially has a renderer or any of the components in the default GameObject class, then Unity would iterate over all objects available. To me, this seems kind of unnecessary, since some objects might not need to be rendered, for example. How, in practice, should these systems be implemented?
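
    One common shape for the answer - sketched here in C# for brevity (every name is hypothetical; the same structure works in C++) - is to have each system own the list of components it processes. Registration happens when the component is added, so the render loop never visits entities that have no renderer:

        using System.Collections.Generic;

        // Hypothetical component: knows how to draw one entity's mesh.
        public class MeshRenderer
        {
            public void Draw() { /* submit this entity's mesh for rendering */ }
        }

        // Hypothetical system: it owns the list of components it processes, so
        // entities without a MeshRenderer cost the render loop nothing.
        public class MeshRendererSystem
        {
            private readonly List<MeshRenderer> renderers = new List<MeshRenderer>();

            public void Register(MeshRenderer r) { renderers.Add(r); }
            public void Unregister(MeshRenderer r) { renderers.Remove(r); }

            public void Update()
            {
                foreach (MeshRenderer r in renderers)
                    r.Draw(); // only subscribed components are ever visited
            }
        }

    Whether AddComponent reaches a singleton system or a system handed to the entity at construction is a separate choice; passing the system in avoids the global state shuddered at above.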

    Read the article

  • With multiple tabs open, how to use Vimperator to open a tab after the current one

    - by Mehrad
    If you're a Firefox user, you know that if you have 10 tabs open and you're in tab #5, and you open a new tab (which becomes #11 in order) to search for something, then when you finish and close tab #11, Firefox takes you back to the last active tab you were in, which is #5. Using Vimperator, you don't get the same effect. In the same scenario, while you're in tab #5 you can press t and search for something, which opens a tab at position #11. Now you press d - and where are you? Tab #10, and you need to ctrl+p all the way back to #5. So in this case I am looking for one of these solutions:

    - the ability to open a tab straight after the current tab (so that closing it with capital D will put me back on my active tab), or
    - a way to close a tab and be brought back to the last active tab that I worked in.

    Will appreciate it if somebody has a solution for this. Thanks

    Read the article

  • Install Adobe AIR on Ubuntu/Linux

    Some time ago, Adobe released the Linux version of Adobe AIR, bringing web applications and widgets to your desktop. Installing new applications on a Linux system is not always as easy as switching the computer on, so the following instructions might be helpful for installing Adobe AIR on any Linux system. First of all, get the latest installer of Adobe AIR from http://get.adobe.com/air/ - as of writing this article the file name is AdobeAIRInstaller.bin. Save the download in your preferred folder. Now, there are two ways to run the installer - visual style or console style.

    Visual installation

    1. Launch your favorite or standard file manager, like Thunar or Nautilus, and browse to the folder where AdobeAIRInstaller.bin has been saved.
    2. Right-click on the file and choose 'Properties' in the context menu.
    3. Set 'Execute' permissions and confirm the modification with OK.
    4. Rename the file to AdobeAIRInstaller.
    5. Double-click it and follow the instructions.

    Using the console

    1. Open a terminal like xterm.
    2. Change into the directory where you stored the download.
    3. Run this command:

        chmod +x AdobeAIRInstaller.bin

    4. Now run this command:

        sudo ./AdobeAIRInstaller.bin

    The normal installer will open; install it. From now on, whenever you download a .air file, just double-click it and it will be installed.

    Troubleshooting

    In case the installation does not start properly, try to install via the console; this gives you more details about the reasons. Should you run into something like this:

        AdobeAIRInstaller.bin: 1: Syntax error: "(" unexpected

    double-check the execute permission of the installer file and try again.

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx

    One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build web parts, master pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by building their site pages, adding the web parts to the site pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together. What I will discuss, and provide a solution for, is a package that has:

    1. Master Page
    2. Page Layout
    3. Page Web Parts
    4. Site Pages

    Almost all of this is done in code, without the development team or the developers having to finish up the site-building process by spending a few hours or days completing the site! I am not implying that in development we do this; in fact, we build these pages incrementally, testing our web parts, etc. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and voila, their site appears! I had a project that had me build 8 pages like this as part of the solution.

    In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site it is a generic master page for a SharePoint 2010 site, with a three-column layout, centered, and a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development! It is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code. Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages. It is a complete solution to add a master page to a site collection, and it contains:

    1. A Master Page module which contains the Master Page and Page Layout
    2. The Footer module to add the Footer web part
    3. Miscellaneous modules to add images, jQuery, CSS and a subsite page
    4. Three features and two feature event receivers:
       a. DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
       b. DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver, it changes the master page to DJGreenMaster, creates the footer list and checks the files in
       c. DJGreenMasterWebParts, which adds the Footer web part to the site collection

    I won't go over the code for this as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So what we have is the basis to begin what is germane to this discussion: I have the first two requirements completed and now need to add the page web parts and build the pages in code. For the page web parts, I will use one downloaded from CodePlex, which for simplicity does not use a SharePoint custom list (a Weather web part), and another downloaded from MSDN, a SharePoint custom calendar web part, to which I added some functionality using jQuery to make the events color-coded beyond the built-in 10 overlays! Screenshots in the original post show the solution with the added projects, the Weather web part deployed, and the site calendar with jQuery.

    Okay, now we get to the final item: creating the publishing pages. We need to add a feature receiver to the DJGreenMaster project - I will name it DJSitePages - and also add an event receiver. We will build the page at the site collection level, and all of the code necessary will be contained in the event receiver. Add a reference to Microsoft.SharePoint.Publishing.dll, contained in the ISAPI folder of the 14 hive. First we will add some static methods which we will call in our event receiver:

        private static void checkOut(string pagename, PublishingPage p)
        {
            if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
            {
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
                {
                    p.CheckOut();
                }

                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
                {
                    p.CheckIn("initial");
                    p.CheckOut();
                }
            }
        }

        private static void checkin(PublishingPage p, PublishingWeb pw)
        {
            SPFile publishFile = p.ListItem.File;

            if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
            {
                publishFile.CheckIn("CheckedIn");
                publishFile.Publish("published");
            }

            // In case of content approval, approve the file (publishing site)
            if (pw.PagesList.EnableModeration)
            {
                publishFile.Approve("Initial");
            }
            publishFile.Update();
        }

    In a publishing site, check-in and check-out are required when dealing with pages. Okay, let's look at the FeatureActivated event receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            object oParent = properties.Feature.Parent;

            if (properties.Feature.Parent is SPWeb)
            {
                currentWeb = (SPWeb)oParent;
                currentSite = currentWeb.Site;
            }
            else
            {
                currentSite = (SPSite)oParent;
                currentWeb = currentSite.RootWeb;
            }

            // create the publishing pages
            CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
            //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
        }

    Basically we are calling the CreatePublishingPage method with these parameters: the current web, the name of the page, the page layout, and the title of the page.

    Let's look at the CreatePublishingPage method:

        private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
        {
            PublishingSite pubSiteCollection = new PublishingSite(site.Site);
            PublishingWeb pubSite = null;
            if (pubSiteCollection != null)
            {
                // Assign an object to the pubSite variable
                if (PublishingWeb.IsPublishingWeb(site))
                {
                    pubSite = PublishingWeb.GetPublishingWeb(site);
                }
            }

            // Search for the page layout for creating the new page
            PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
            // If the Page Layout could not be found in the collection, return,
            // because the page has to be based on an existing Page Layout
            if (currentPageLayout == null)
            {
                return;
            }

            PublishingPageCollection pages = pubSite.GetPublishingPages();
            foreach (PublishingPage p in pages)
            {
                // The page already exists
                if ((p.Name == pageName)) return;
            }

            PublishingPage newPage = pages.Add(pageName, currentPageLayout);
            newPage.Description = pageName.Replace(".aspx", "");
            // Here you can set some properties like:
            newPage.IncludeInCurrentNavigation = true;
            newPage.IncludeInGlobalNavigation = true;
            newPage.Title = title;

            // build the page
            switch (pageName)
            {
                case "Home.aspx":
                    checkOut("Home.aspx", newPage);
                    BuildHomePage(site, newPage);
                    break;
                default:
                    break;
            }

            // newPage.Update();
            // Now we can check the newly created page in to the "Pages" library
            checkin(newPage, pubSite);
        }

    The narrative of what is going on here is:

    1. We find out whether we are dealing with a publishing web.
    2. We get the page layout.
    3. We create the page in the Pages list.
    4. Based on the page name, we build that page. (Here is where we can add the methods to build multiple pages.)

    In the switch we call BuildHomePage, where all the work is done to add the web parts. Prior to adding the web parts we need to add references to the two web part projects in the solution:

        using WeatherWebPart.WeatherWebPart;
        using CSSharePointCustomCalendar.CustomCalendarWebPart;

    We can then reference them in the BuildHomePage method. Let's look at BuildHomePage:

        private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
        {
            // Get the web part manager for the page (for each page, do the same,
            // changing to the web parts for that page)
            SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(
                web.Url + "/Pages/Home.aspx",
                System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

            WeatherWebPart.WeatherWebPart.WeatherWebPart wwp =
                new WeatherWebPart.WeatherWebPart.WeatherWebPart()
                { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" };
            //Dictionary<string, string> wwpDic = new Dictionary<string, string>();
            //wwpDic.Add("AreaCode", "2504627");
            //setWebPartProperties(wwp, "WeatherWebPart", wwpDic);

            // Add the web part to a page layout Web Part Zone
            mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

            CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp =
                new CustomCalendarWebPart()
                { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName = "CorporateCalendar" };

            mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

            pubPage.Update();
        }

    Here is what we are doing:

    1. We get a reference to the SharePoint limited web part manager, linked to Home.aspx.
    2. We instantiate a new Weather web part and use the manager to add it to the page in a web part zone identified by ID - thus the need for a page layout where the developer knows the IDs.
    3. We instantiate the Calendar web part and use the manager to add it to the page.
    4. We call the publishing page's Update method.
    5. Lastly, the CreatePublishingPage method checks in the page just created.

    The original post ends with a screen shot of the page right after a deploy. Okay! I know we could make a home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built. So what am I saying? Build your web parts, master pages and other artifacts, and at the very end of the engagement build the pages. The client will be very happy! Here is the code for this solution.

    Read the article

  • Remote deploying wars to a liferay installation

    - by iftrue
    With vanilla Tomcat, you can POST to URLs beneath SOMEURL/manager/ with a proper manager user role defined. The Liferay deployment of Tomcat, however, is missing the manager and host-manager applications, and when I copy the directories from a vanilla Tomcat installation, I get the exception below:

        javax.servlet.ServletException: Error allocating a servlet instance
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)

    Root cause:

        java.lang.SecurityException: Servlet of class org.apache.catalina.manager.HTMLManagerServlet is privileged and cannot be loaded by this web application
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)

    What's the proper way to remote-deploy wars to a Liferay instance? (Not portlets, in my case.)

    Read the article

  • How to maintain the log source host using logstash

    - by Ray Rodriguez
    I am following the steps in this blog to set up rsyslog + logstash + graylog2, and I can't figure out how to replace the @source_host attribute in logstash using the mutate/replace filter. In the example the author replaces his @source_host with a string value, but I'd like to use the actual value that is parsed from, in this case, a syslog message.

        mutate {
            type => loc1
            replace => ["@source_host", "loc1"]
        }
        mutate {
            type => loc2
            replace => ["@source_host", "loc2"]
        }

    How do I actually maintain the original source host in my logs?

    Read the article

  • Can't run node.js script on server reboot

    - by webstyle
    I need to listen for events on port 3240 and I'm using node.js for that purpose. I need to execute my script with the forever tool, and I also need to run forever on server reboot. When I run forever glh.js everything works: forever list says there is a running process. But when I try to run forever on server reboot, I can't get it working. I've created a file in /etc/init.d with the following content:

        #!/bin/bash
        /var/www/yan/data/gitlabhook/runglh.sh &>/var/www/yan/data/gitlabhook/runglh.log

    When I reboot the server, the output log is the following (the same as when I run it manually via the console):

        info: Forever processing file: glh.js

    But in this case forever doesn't start a process; forever list outputs:

        info: No forever processes running

    Read the article

  • How to prevent 2D camera rotation if it would violate the bounds of the camera?

    - by Andrew Price
    I'm working on a Camera class, and I have a Rectangle field named Bounds that determines the bounds of the camera. I have it working for zooming and moving, so that the camera cannot exit its bounds. However, I'm a bit confused about how to do the same for rotation. Currently I allow rotating the camera's Z-axis, but if the camera is sufficiently zoomed out, rotating it can show areas of the screen outside the camera's bounds. I'd like to deny the rotation if it would mean that the newly rotated camera would expose areas outside the camera's bounds, but I'm not quite sure how. I'm still new to matrix and vector math, and I'm not sure how to determine whether the newly rotated camera would see outside of its bounds so that I can undo the rotation. Here's an image showing the problem: http://i.stack.imgur.com/NqprC.png - the red is out of bounds, and as a result the camera should never be allowed to rotate itself like this. This seems like it would be a problem with all rotated values, but that is not the case when the camera is zoomed in enough. Here are the current member variables for the Camera class:

        private Vector2 _position = Vector2.Zero;
        private Vector2 _origin = Vector2.Zero;
        private Rectangle? _bounds = Rectangle.Empty;
        private float _rotation = 0.0f;
        private float _zoom = 1.0f;

    Is this possible to do? If so, could someone give me some guidance on how to accomplish this? Thanks. EDIT: I forgot to mention that I am using a transformation-matrix-style camera that I pass in to SpriteBatch.Begin. I am using the same transformation matrix from this tutorial.
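
    One way to model the test - a hedged sketch against XNA types, assuming the transform is composed the way such tutorials typically do it (the method name and composition order here are assumptions, not taken from the linked tutorial): build the transform the proposed rotation would produce, invert it, map the four screen corners back into world space, and reject the rotation if any corner lands outside Bounds.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical helper for the Camera class above: returns true if a
        // camera with the proposed rotation would still see only points
        // inside _bounds, checked via the four corners of the viewport.
        private bool RotationStaysInBounds(float proposedRotation, Viewport viewport)
        {
            Matrix transform =
                Matrix.CreateTranslation(new Vector3(-_position, 0f)) *
                Matrix.CreateRotationZ(proposedRotation) *
                Matrix.CreateScale(_zoom) *
                Matrix.CreateTranslation(new Vector3(_origin, 0f));

            // Map screen corners back into world space under the proposed transform.
            Matrix inverse = Matrix.Invert(transform);
            Vector2[] corners =
            {
                Vector2.Transform(Vector2.Zero, inverse),
                Vector2.Transform(new Vector2(viewport.Width, 0f), inverse),
                Vector2.Transform(new Vector2(0f, viewport.Height), inverse),
                Vector2.Transform(new Vector2(viewport.Width, viewport.Height), inverse),
            };

            foreach (Vector2 corner in corners)
            {
                if (!_bounds.Value.Contains((int)corner.X, (int)corner.Y))
                    return false; // this corner would expose out-of-bounds area
            }
            return true;
        }

    The rotation setter then only assigns _rotation when RotationStaysInBounds(value, viewport) returns true. Because the test uses the exact transform, it automatically succeeds when the camera is zoomed in far enough that rotation is safe.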

    Read the article

  • Configuring Weblogic Server 10.3.6 from 32-bit mode to 64-bit mode

    - by Ekta Malik
    This post pertains to the configuration of WebLogic Server from 32-bit mode to 64-bit mode on Solaris OS - just in case you have WLS 10.3.6 running in 32-bit mode and the JDK being used is installed for 64-bit mode. (On Solaris OS, a 64-bit JDK installation comprises installing the 32-bit JDK followed by a patch for the 64-bit JDK.)

    Verification of the mode being used

    One can verify the mode of WebLogic Server in the following ways:

    - Check the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin, where $MIDDLEWARE_HOME refers to the install directory of Middleware. Look for the parameters SUN_ARCH_DATA_MODEL and JAVA_USE_64BIT in the file. For 32-bit mode, the parameters would appear as shown below:

        SUN_ARCH_DATA_MODEL="32"
        JAVA_USE_64BIT=false

    - Check the server console logs for which JDK is being used during start-up.
    - Check which JDK is being used by the running process of WebLogic Server.

    Configuration Steps

    1. Take a backup of the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin.
    2. Modify the commonEnv.sh script so that the values are 64 and true respectively for 64-bit mode:

        SUN_ARCH_DATA_MODEL="64"
        JAVA_USE_64BIT=true

    3. Restart the WebLogic server. One can confirm that the JDK being used is 64-bit by looking at the WebLogic console logs during server start-up or by looking at the running process.

    Read the article

  • BPM PS6 video showing process lifecycle in more detail (30min) by Mark Nelson

    - by JuergenKress
    If the five-minute video I shared last week has whetted your appetite for more, then this might be just what you are looking for! The same international team that made that video - Andrew Dorman, Tanya Williams, Carlos Casares, Joakim Suarez and James Calise - have also created a thirty-minute version that walks through the process in much more detail and shows you, from the perspective of the various business stakeholders involved in process modeling, exactly how BPM PS6 supports the end-to-end process lifecycle. The video centres around a retail leasing use case, and follows how Joakim the Business Analyst, Pablo the Process Owner, and James the Process Analyst take the process from conception to runtime, solely through BPM Composer, without the need for IT or the use of JDeveloper:

    - Joakim, the Business Analyst, models the process, designs the user interaction forms, and creates business rules.
    - Pablo, the Process Owner, reviews the process documentation and tests the process using the new 'Process Player'.
    - James, the Process Analyst, analyses the process and identifies potential bottlenecks using 'Process Simulation'.

    Read the full article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • How to avoid big and clumsy UITableViewControllers on iOS?

    - by Johan Karlsson
    I have a problem when implementing the MVC pattern on iOS. I have searched the Internet but can't seem to find any nice solution to this problem. Many UITableViewController implementations seem to be rather big. Most examples I have seen let the UITableViewController implement UITableViewDelegate and UITableViewDataSource, and these implementations are a big reason why UITableViewController is getting big. One solution would be to create separate classes that implement UITableViewDelegate and UITableViewDataSource. Of course these classes would have to have a reference to the UITableViewController. Are there any drawbacks to using this solution? In general I think you should delegate the functionality to other "helper" classes or similar, using the delegate pattern. Are there any well-established ways of solving this problem? I do not want the model to contain too much functionality, nor the view. I believe that the logic should really be in the controller class, since this is one of the cornerstones of the MVC pattern. But the big question is: how should you divide the controller of an MVC implementation into smaller, manageable pieces? (This applies to MVC on iOS in this case.) There might be a general pattern for solving this, although I am specifically looking for a solution for iOS. Please give an example of a good pattern for solving this issue, and also an argument for why that solution is awesome.

    Read the article

  • Using an Apt Repository for Paid Software Updates

    - by Scott Warren
    I'm trying to determine a way to distribute software updates for a hosted/on-site web application that may have weekly and/or monthly updates. I don't want the customers who use the on-site product to have to worry about updating it manually; I just want it to download and install automatically, à la Google Chrome. I'm planning on providing an OVF file with Ubuntu and the software installed and configured. My first thought on how to distribute the software is to create six apt repositories/channels (not sure which would be better at this point) that will be accessed over SSH using keys, so that if a customer doesn't renew their subscription we can disable their account:

    - Beta - used internally on test data to check the package for major defects.
    - Internal - used internally on live data to check the package for defects (dogfooding stage).
    - External 1 - deployed to 1% of our user base (randomly selected) to check for defects.
    - External 9 - deployed to 9% of our user base (randomly selected) to check for defects.
    - External 90 - deployed to the remaining 90% of users.
    - Hosted - deployed to the hosted environment.

    It will take a sign-off at each stage to move into the next repository, in case problems are reported. My questions to the community are: Has anyone tried something like this before? Can anyone see a downside to this type of procedure? Is there a better way?

    Read the article

  • Advantages of Singleton Class over Static Class?

    Point 1) Singleton: we can get the object of a singleton and then pass it to other methods. Static class: we cannot pass a static class to other methods the way we pass objects.

    Point 2) Singleton: in the future, it is easy to change the logic of creating objects to some pooling mechanism. Static class: it is very difficult to implement pooling logic in the case of a static class; we would need to make the class non-static and then make all the methods non-static, so all the calling code would need to change.

    Point 3) Singleton: can a singleton class be inherited by a subclass? The Singleton pattern does not place any restriction on inheritance, so we should be able to do this as long as the subclass preserves the singleton behavior. There's nothing fundamentally wrong with subclassing a class that is intended to be a singleton. There are many reasons you might want to do it, and there are many ways to accomplish it; it depends on the language you use. Static class: we cannot inherit a static class in C#. Think about it this way: you access static members via the type name, like this:

        MyStaticType.MyStaticMember();

    Were you to inherit from that class, you would have to access it via the new type name:

        MyNewType.MyStaticMember();

    Thus, the new item bears no relationship to the original when used in code. There would be no way to take advantage of any inheritance relationship for things like polymorphism.
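
    A minimal C# sketch of points 1 and 2 (the class is hypothetical): because callers receive an instance rather than naming a type, the creation logic behind Instance can later switch to pooling, or return a subclass, without touching any call sites.

        using System;

        // Hypothetical singleton: consumers depend on the instance, not the
        // type, so construction can change behind the Instance property.
        public class ConnectionManager
        {
            private static readonly Lazy<ConnectionManager> instance =
                new Lazy<ConnectionManager>(() => new ConnectionManager());

            protected ConnectionManager() { } // protected: subclassing remains possible

            public static ConnectionManager Instance
            {
                get { return instance.Value; }
            }

            public virtual void Connect() { /* open a connection */ }
        }

        // Point 1 in action: the instance travels like any other object.
        //     void Process(ConnectionManager cm) { cm.Connect(); }
        //     Process(ConnectionManager.Instance);

    To introduce pooling later, only the factory lambda inside the class changes; a static class offers no such seam.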

    Read the article

  • Email client supporting multiple accounts

    - by TGP1994
    I've been using Microsoft Outlook for a very long time, although one thing that has bugged me is how multiple email accounts are handled. As far as I can tell, there isn't a set and straightforward way of managing multiple accounts in one instance of outlook. For example, when I create an email, saving it as a draft will by default dump it into the first personal folder that I have open, which in my current case, is not where I want it. I would like all trash, spam, drafts, contacts, etc. etc. to be handled on a PF by PF basis. Now to my question: Is there a way to accomplish the task of email account "segregation" in Outlook (2007 is my current version), or is there another client that handles this in a more organized fashion? Note: I don't use most of the features in outlook (I hardly even need special formatting for my messages), I generally just send and read mail, and get a few attachments, so leaving Outlook wouldn't be too much of a stretch for me.

    Read the article

  • Oracle is Proud Sponsor of Gartner Security and Risk Management Summit 2011

    - by Troy Kitch
    Oracle will have a very strong presence at this year's Gartner Security and Risk Management Summit 2011 in Washington D.C., June 20-23. If you plan on being there, please be sure to stop by Oracle booth D and say "hi" to the Security Solution Experts. Please join us for the:

    - Oracle Solution Provider Session
    - Oracle Solution Showcase Receptions
    - Oracle Face to Face Meetings

    We have some powerful database security demonstrations that we're showing off. If you haven't had an opportunity to check out the new Oracle Database Firewall, now's your chance to learn why it's the first line of defense in a database security defense-in-depth strategy. Additionally, Mark Morrison, director of intelligence community information assurance, and Pat Sack, VP of the Oracle national security group, will discuss U.S. government cross-domain secure information sharing. This case study session will explain how Oracle helped the U.S. government consolidate its mission-critical intelligence database infrastructure securely, and the underlying Oracle Database security solutions that can benefit any organization looking to increase business agility and drive down IT costs through database consolidation. Potomac Ballroom B. Find out more about the event here. Tweet #GartnerSecurity to join the conversation.

    Read the article

  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks when virtualizing a server is determining the amount of memory that the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs; in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs. In order to achieve a better consolidation (aka VM density) ratio, Windows Server 2008 R2 SP1 has introduced what Microsoft calls 'Dynamic Memory'. This means that the RAM assigned to guest virtual machines can be allowed to vary according to demand, changing dynamically while the VM is running, based on the workload of the applications running inside. If demand outstrips supply, then memory can be rationed according to the 'memory weight' assigned to the guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware's Memory Overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer. This is fine for applications that are self-regulating in their demands for memory, releasing memory back into the 'pool' when not under peak load. Other applications, however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload: they can grab hot-add memory while running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided some guidelines for configuring SQL Server in virtual environments. Whereas VMware's Memory Overcommit is well-proven in a number of different configurations, Hyper-V's 'Dynamic Memory' is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.

    Read the article
