Search Results

Search found 29702 results on 1189 pages for 'select insert'.


  • Share Your Top 30 Visited Domains with Visitation Cloud for Firefox

    - by Asian Angel
    Curious about the domains that you visit most, or perhaps you want a way to share that information on a social website? Now you can see and share the 30 most visited domains in your browser’s history with the Visitation Cloud extension.

    Accessing Visitation Cloud: As soon as you install the extension you can get started using it. Depending on how your browser’s UI is set up, there are three methods for accessing Visitation Cloud: a “Visitation Cloud Button” inserted at the end of your “Bookmarks Toolbar”, a menu listing in the “Tools Menu”, and a “Toolbar Button” (not shown here).

    Visitation Cloud in Action: As soon as you activate Visitation Cloud, a new window will appear with your top domains displayed in a cloud format. Keep in mind that this is more than just a static image…each listing is actually a clickable link. Clicking on any of the listings will open that domain in a new tab or window, depending on your particular browser settings. If you feel that you have a great set of links and want to share it with your friends, that is easy to do. Right click anywhere within the Visitation Cloud window and select “Save as…”. The cloud image can be saved in .png, .jpg, or Scalable Vector Graphics (.svg) format. For our example we chose the .svg format. Perhaps you love the set of links but not the layout…right click and select “Randomize” to change how the cloud looks. Here is our cloud after being randomized. Things definitely got moved around…

    Accessing the Visitation Cloud Image in other Browsers: Once you have your cloud image saved, you can share it with friends or keep it for your own future use in other browsers. Here is our cloud image open in Opera with link opening in progress. The same cloud image open in Google Chrome. Very nice…

    Conclusion: While this may not be something that everyone will use, Visitation Cloud does make for a rather unique, interesting, and fun way to access and share your most visited domains.

    Links: Download the Visitation Cloud extension (Mozilla Add-ons)

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedures or other plan caching mechanisms like sp_executesql or prepared statements

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries, the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated for each set of parameter values / predicates. Common memory-allocating queries are those that perform Sort or Hash Match operations such as Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

    When the plan is cached by using a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute it. This can lead to underestimation or overestimation of memory on plan reuse. Overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to a spill to tempdb, resulting in poor performance.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote, click here.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill to tempdb, unless the memory requirement of a stored procedure does not change significantly across predicates.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory, let’s see an example where we initially sort 1 month of data and then use the same stored procedure to sort 6 months of data. Let’s create a stored procedure that sorts customers by name within a certain date range.

    --Example provided by www.sqlworkshops.com
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
    begin
          declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
          select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                where c.CreationDate between @CreationDateFrom and @CreationDateTo
                order by c.CustomerName
          option (maxdop 1)
    end
    go

    Let’s execute the stored procedure initially with a 1 month date range.

    set statistics time on
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go

    The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.
    The estimated number of rows, 43199.9, is close to the actual number of rows, 43200, and hence the memory estimate is fine. There were no Sort Warnings in SQL Profiler.

    Now let’s execute the stored procedure with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way off from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 1 month in our case. This underestimation leads to the sort spilling to tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler.

    To monitor the amount of data written to and read from tempdb, one can execute the following before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL)

    Let’s recompile the stored procedure and then execute it first with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), due to the locking issues involved with sp_recompile (a hedged sketch of looking up the plan handle appears after the optimize for example below). Refer to our webcasts for further details.

    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, matches the actual number of rows, 259200. The better performance of this stored procedure is due to the better memory estimate, which avoids the sort spilling to tempdb. There were no Sort Warnings in SQL Profiler.

    Now let’s execute the stored procedure with a 1 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go

    The stored procedure took 49 ms to complete, similar to our very first execution. This time the stored procedure was granted more memory (26832 KB) than necessary (6656 KB), based on the 6 month estimate (259200 rows) instead of the 1 month estimate (43199.9 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 6 months in this case. This overestimation did not affect performance here, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. Overestimation can also hurt the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details.

    Let’s recompile the stored procedure and then execute it first with a 2 day date range.

    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-02'
    go

    The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler.

    Now let’s execute the stored procedure with a 6 month date range.
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated, but we know from the earlier runs that this stored procedure needs 26832 KB of memory to execute the 6 month date range optimally without spilling to tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler; unlike before, this was a Multiple pass sort instead of a Single pass sort, which occurs when the granted memory is far too low.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined date range.

    Let’s recreate the stored procedure with the recompile hint.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
    begin
          declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
          select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                where c.CreationDate between @CreationDateFrom and @CreationDateTo
                order by c.CustomerName
          option (maxdop 1, recompile)
    end
    go

    Let’s execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The execution with the 1 month date range has a good estimate like before, and the execution with the 6 month date range also has a good estimate and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

    Let’s recreate the stored procedure with the optimize for hint set to the 6 month date range.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
    begin
          declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
          select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                where c.CreationDate between @CreationDateFrom and @CreationDateTo
                order by c.CustomerName
          option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30'))
    end
    go

    Let’s execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The execution with the 1 month date range overestimates rows and memory, because we provided a hint to optimize for 6 months of data. The execution with the 6 month date range has a good estimate and memory grant, because we provided a hint to optimize for 6 months of data.
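    As an aside, here is a hedged sketch (not part of the original walkthrough) of the DBCC FREEPROCCACHE (plan_handle) alternative to sp_recompile mentioned earlier. It looks up the plan handle from the standard plan cache DMVs; the LIKE filter is illustrative. Do not run this in the middle of the walkthrough, since it evicts the cached plan the next step relies on.

    --Hedged sketch: evict a single cached plan instead of using sp_recompile
    declare @plan_handle varbinary(64)
    select top 1 @plan_handle = cp.plan_handle
    from sys.dm_exec_cached_plans cp
    cross apply sys.dm_exec_sql_text(cp.plan_handle) st
    where cp.objtype = 'Proc' and st.text like '%CustomersByCreationDate%'
    DBCC FREEPROCCACHE (@plan_handle)
    go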
    Let’s execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-12-31'
    go

    The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range, while the actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range, not the 12 month date range, so the sort spills to tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance for every execution, unlike the recompile hint.

    This article covered underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, though that will lead to compilation overhead; however, in most cases it is better to pay for compilation than to spill a sort to tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if one sorts more data than hinted, this will still lead to a spill; on the other side there is also the possibility of overestimation, causing unnecessary memory pressure for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might itself lead to poor performance. When the values used in the optimize for hint are archived from the database, the estimation will be wrong, leading to the worst performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • SQL SERVER – Introduction to PERCENTILE_DISC() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENTILE_DISC(). Books Online gives the following definition of this function: Computes a specific percentile for sorted values in an entire rowset or within distinct partitions of a rowset in Microsoft SQL Server 2012 Release Candidate 0 (RC 0). For a given percentile value P, PERCENTILE_DISC sorts the values of the expression in the ORDER BY clause and returns the value with the smallest CUME_DIST value (with respect to the same sort specification) that is greater than or equal to P.

    If you are clear about this definition – no need to read further. If you got lost, here is the same in simple words – find the value of the column whose CUME_DIST is equal to or greater than the given percentile. Before you continue reading this blog I strongly suggest you read about the CUME_DIST function over here: Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012.

    Now let’s have fun with the following query:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
    CUME_DIST() OVER(PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
    PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY ProductID) OVER (PARTITION BY SalesOrderID) AS PercentileDisc
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    The above query will give us the following result: You can see that I have used PERCENTILE_DISC(0.5) in the query, which is similar to finding the median, but not exactly the same. The PERCENTILE_DISC() function takes a percentile as a parameter. It returns the value whose CUME_DIST is equal to or greater than the percentile value passed in. For example, in the query above we are passing 0.5 into the PERCENTILE_DISC() function. It goes through the resultset and identifies which rows have CUME_DIST values equal to or greater than 0.5. In the first windowed resultset, the first row whose CUME_DIST reaches 0.5 supplies the ProductID that PERCENTILE_DISC() returns. In the third windowed resultset there is only a single row, with a CUME_DIST() value of 1, which is certainly higher than 0.5, making it the answer.

    To make sure that we are clear about this, here is one more example where I am passing 0.6 as the percentile. Let’s have fun with the following query:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
    CUME_DIST() OVER(PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
    PERCENTILE_DISC(0.6) WITHIN GROUP (ORDER BY ProductID) OVER (PARTITION BY SalesOrderID) AS PercentileDisc
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    The above query will give us the following result: The result of PERCENTILE_DISC(0.6) is the ProductID of the first row whose CUME_DIST() is equal to or greater than 0.6. This means that for SalesOrderID 43670 the row with CUME_DIST() 0.75 is the qualifying row, giving the answer 773 for ProductID. I hope this explanation makes it clearer.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
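    A quick footnote to this post (a minimal sketch, not from the original article): on a tiny inline rowset, PERCENTILE_DISC(0.5) always returns an actual value from the set – here 20, the smallest value whose CUME_DIST (0.5) reaches the percentile – whereas the related PERCENTILE_CONT(0.5) would interpolate and return 25.

    -- Hedged illustration on an inline rowset of 10, 20, 30, 40
    SELECT DISTINCT
    PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY val) OVER () AS MedianDisc
    FROM (VALUES (10), (20), (30), (40)) AS t(val)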

    Read the article

  • Friday Fun: Snow Crusher

    - by Asian Angel
    It has probably been a long week, whether you have already returned to work or are finishing up the last of your vacation time. If you are in need of some stress relief, then we have the perfect game for you. This week you get to be totally fiendish and use a monster-size snowball to destroy as many cars as possible at the local snow lodge.

    Snow Crusher: The object of the game is simple…create as large a monster snowball as you can and then send it down over the side of the mountain to destroy the cars at the snow lodge. You can choose from three different sizes of monster snowballs to create. We chose the “Snowflake Size” for our reign of destruction. Once you have chosen a monster snowball size, all that is left to do is select the control method that works best for you. As soon as you select the control method, your monster snowball creation will automatically begin. Keep in mind that the faster your snowball goes, the harder it can become to steer if you make sudden movements…

    At the top you can watch your progress towards the drop-off point, and the green boxes highlighted at the bottom indicate how large an item (such as trees or boulders) your snowball can roll over and add to its total mass. Snowball speed is shown in the lower right corner. Time to roll! As soon as the first green box is lit up you can start adding small trees to your snowball’s mass. You will want to avoid larger items as you go because they will penalize your score, slow you down, and reduce the size of your snowball! Halfway to the drop-off point and our snowball is now able to grab up larger trees. If you have not hit any large items along the way, your snowball will definitely be moving along at a good rate by now. When you reach the end of the mass-building area, your snowball will pop out into the open and get ready to drop off over the side of the mountain. Go snowball go! Yes! Thirteen cars crushed and ready for the scrap yard… If the “Snowflake Size” snowball can do this, just think what the “Avalanche Size” can do with three minutes of time to build up mass! Have fun with those monster snowballs!

    Play Snow Crusher

    Read the article

  • SQL SERVER – Detecting guest User Permissions – guest User Access Status

    - by pinaldave
    Earlier I wrote the blog post SQL SERVER – Disable Guest Account – Serious Security Issue, and I got many comments asking questions related to the guest user. Here are the comments of Manoj: 1) How do we know if the guest user is enabled or disabled? 2) What is the default for the guest user in SQL Server?

    Default settings for guest user: When SQL Server is installed, by default the guest user is disabled for security reasons. If the guest user is not properly configured, it can create a major security issue. You can read more about this here.

    Identify guest user status: There are multiple ways to identify the guest user status.

    Using SQL Server Management Studio (SSMS): You can expand the database node >> Security >> Users. If you see a RED arrow pointing downward, it means that the guest user is disabled.

    Using sys.sysusers: Here is a simple script. If the column hasdbaccess is 1, it means that the guest user is enabled and has access to the database.

    SELECT name, hasdbaccess
    FROM sys.sysusers
    WHERE name = 'guest'

    Using sys.database_principals and sys.database_permissions: This script is valid in SQL Server 2005 and later versions. This has been my preferred method recently. (Note: the CONNECT permission for a database user is recorded in sys.database_permissions, not sys.server_permissions.)

    SELECT name, permission_name, state_desc
    FROM sys.database_principals dp
    INNER JOIN sys.database_permissions sp
    ON dp.principal_id = sp.grantee_principal_id
    WHERE name = 'guest' AND permission_name = 'CONNECT'

    Using sp_helprotect: Just run the following stored procedure, which will give you all the permissions associated with the user.

    sp_helprotect @username = 'guest'

    Disable guest account:

    REVOKE CONNECT FROM guest

    Additionally, the guest account cannot be disabled in master and tempdb; it is always enabled. There is a special need for this. Let me ask a question back at you: In which scenario do you think it will be useful to keep the guest user enabled, and what additional configuration would go along with that scenario?

    Note: Special mention to Imran Mohammed for always being there when users need help.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology
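    A quick footnote to this post (a hedged companion to the REVOKE statement above, not from the original article): should you ever need to re-enable the guest user in a user database, GRANT CONNECT restores access. Run it in the target database.

    -- Re-enable guest (run in the target database), then disable it again
    GRANT CONNECT TO guest;
    REVOKE CONNECT FROM guest;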

    Read the article

  • Sync Your Windows Computer with Your Ubuntu One Account [Desktop Client]

    - by Asian Angel
    Do you have a Windows computer that needs to be synced with the Ubuntu systems connected to your Ubuntu One account? Not a problem. Just grab a copy of the Ubuntu One desktop client, and in just a few minutes your Windows system will be feeling the Ubuntu love.

    Once you get the desktop client installed you will see a new System Tray icon waiting for you. Access the context menu and select Add this computer to start the syncing process. Enter your account details into the login window that appears and click Connect to Ubuntu One. Go back to the System Tray icon, access the context menu, and select Synchronize Now. You can monitor the progress as small desktop notification messages keep you updated during the synchronizing process. The newly synchronized files will be placed in an Ubuntu One folder under Documents/My Documents. Here is a quick peek at the Preferences window. The only odd thing (bug) that we noticed with the whole setup was “Disconnected” being displayed even though our system was freshly synchronized and logged in.

    Note: Works on Windows XP (with SP3 & Windows Installer 4.5), Vista, and Windows 7. You will need to have the .NET 4 Framework installed (links for both installer types provided below).

    Need to access your Ubuntu One account directly through your browser? Then see our article on Accessing and Managing Your Ubuntu One Account in Chrome and Iron.

    Links: Download the Ubuntu One Desktop Client [Ubuntu One Wiki] *Click on the (https://one.ubuntu.com/windows/beta) link to start the download. Microsoft .NET Framework 4 (Standalone Installer) [Microsoft] Microsoft .NET Framework 4 (Web Installer) [Microsoft]

    Read the article

  • Bajaj Electricals Lights up India with Oracle SCM Apps!

    - by [email protected]
    The executive team from Bajaj Electricals/India did a fantastic job of summarizing their Oracle experience in the 5 minute clip below. Their Chairman, Mr. Bajaj, along with Ram (GM) and several other business leaders, highlighted the decision to select Oracle and Oracle Consulting to connect their business applications to meet the needs of a growing country. It’s a clip worth listening to and provides keen insight into both the vision and the results. http://www.oracle.com/pls/ebn/live_viewer.main?p_direct=yes&p_shows_id=8226860

    Read the article

  • Sesame update du jour: SL 4, OOB, Azure, and proxy support

    - by Fabrice Marguerie
    I've just published a new version of Sesame Data Browser. Here's what's new this time:

    - Upgraded to Silverlight 4
    - Can run out-of-browser (OOB), with elevated permissions. This gives you an icon on your desktop and enables new scenarios. Note: the application is unsigned for the moment.
    - Support for Windows Azure authentication
    - Support for SQL Azure authentication
    - If you are behind a proxy that requires authentication, just give Sesame a new try after clicking on "If you are behind a proxy that requires authentication, please click here"
    - An icon and a button for closing connections are now displayed on connection tabs
    - Some less visible improvements

    Here is the connection view with anonymous access: If you want to access Windows Azure tables as OData, all you have to do is use your table storage endpoint as the URL, and provide your access key. A Windows Azure table storage address looks like this: http://<your account>.table.core.windows.net/

    If you want to browse your SQL Azure databases with Sesame, you have to enable OData support for them at https://www.sqlazurelabs.com/ConfigOData.aspx. I won't show how it works because it's already been done in several places over the Web. Here are pointers:

    - OData.org: Got SQL Azure? Then you've got OData
    - OakLeaf Systems: Enabling and Using the OData Protocol with SQL Azure
    - Patrick Verbruggen: Creating an OData feed for your Azure databases
    - Shawn Wildermuth: SQL Azure's OData Support
    - Jack Greenfield: How to Use OData for SQL Azure with AppFabric Access Control

    You can choose to enable anonymous access or not. When you don't enable anonymous access, you have to provide an Issuer name and a Secret key, and optionally a Security Token Service (STS) endpoint.

    Excerpt from Jack Greenfield's blog: To enable OData access to the currently selected database, check the box labeled "Enable OData". When OData access is enabled, database user mapping information is displayed at the bottom of the form. Use the drop-down list labeled "Anonymous Access User" to select an anonymous access user. If an anonymous access user is selected, then all queries against the database presented without credentials will execute by impersonating that user. You can access the database as the anonymous user by clicking on the link provided at the bottom of the page. If no anonymous access user is selected, then the OData Service will not allow anonymous access to the database. Click the link labeled "Add User" to add a user for authenticated access. In the pop-up panel, select the user from the drop-down list. Leave the issuer name empty for simple authentication, or provide the name of a trusted Security Token Service (STS) for federated authentication. For example, to federate with another ACS based STS, provide the base URI for the STS endpoint displayed by the Windows Azure AppFabric Portal for the STS. Click the "OK" button to complete the configuration process and dismiss the pop-up panel. When one or more authenticated access users are added, the OData Service will impersonate them when appropriate credentials are presented. You can designate as many authenticated access users as you like. The OData Service will decide which one to impersonate for each query by inspecting the credentials presented with the query.

    Next time I'll give an overview of how Sesame Data Browser is built. In the meantime, happy data browsing!

    Read the article

  • SQL SERVER – Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(). These functions return the first and last values from a list. It would be difficult to explain this in words only, so I’d like to attempt to explain it through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experimenting.

    Now let’s have fun with the following query:

    USE AdventureWorks
    GO
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
    FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue,
    LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    GO

    The above query will give us the following result: The most interesting thing here is that as we go from row 1 to row 10, the value of FIRST_VALUE() remains the same but the value of LAST_VALUE() keeps increasing. The reason is that at each row, the window considered is that row and all the rows before it, so the last value is always the value of the row we are currently looking at. To fully understand this statement, see the following figure.

    This may be useful in some cases, but not always. However, when we use the same query with PARTITION BY, it starts returning results that can readily be used in analytical algorithms. Let us have fun with the following query:

    USE AdventureWorks
    GO
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
    FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) FstValue,
    LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LstValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    GO

    The above query will give us the following result: Let us understand how PARTITION BY windows the resultset. I have used PARTITION BY SalesOrderID in my query. This creates small windows within the original resultset, and the FIRST_VALUE and LAST_VALUE logic is applied within each window. Well, this is just an introduction to these functions. In future blog posts we will go deeper into the usage of these two functions. By the way, these functions can be applied to VARCHAR fields as well and are not limited to numeric fields only.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
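    A quick footnote to this post: the growing LAST_VALUE() behavior observed above follows from the default window frame, which with an ORDER BY clause ends at the current row. Here is a minimal sketch (not from the original article) of widening the frame so LAST_VALUE() returns the true last value of each partition.

    -- Hedged sketch: an explicit frame makes LAST_VALUE cover the whole partition
    SELECT s.SalesOrderID, s.SalesOrderDetailID,
    LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)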

    Read the article

  • SQL SERVER – Three Puzzling Questions – Need Your Answer

    - by pinaldave
    Last week I asked three questions on my blog, and I got a very good response to them. I am planning to write a summary post for each of the three questions next week. Before I write the summary posts and give credit to all the valid answers, I wanted to bring the questions to the notice of all of you this week.

    Why SELECT * throws an error but SELECT COUNT(*) does not: This is indeed a very interesting question, as not many realize that SQL Server demonstrates this kind of behavior out of the box. Once you run both pieces of code and read the explanation, it makes total sense why SQL Server behaves the way it does. There is also a Connect item associated with it. Also read the very first comment by Rob Farley; it shares very interesting detail.

    Statistics are not Updated but are Created Once: This puzzle has multiple right answers, and I am glad to see many correct answers in the comments of that blog post. Statistics are very important; they really help the SQL Server engine come up with an optimal execution plan. DBAs quite often ignore statistics, thinking they need no attention, as they are automatically maintained if the proper database settings are configured (auto update and auto create). Well, in this question we have a scenario where, even though auto create and auto update statistics are ON, statistics are not updated. There are multiple solutions, but what would your solution be in this case?

    When to use Function and When to use Stored Procedure: This is a rather open-ended question – there is no right or wrong answer. Every developer has used functions and stored procedures. Here is the chance to justify when to use a stored procedure and when to use a function. I want to acknowledge that they can sometimes be used interchangeably, but there are a few reasons why one should not do that, and a few cases when one is better than the other. Let us discuss this here. Your opinion matters.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
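    A quick footnote to this post – an editor's guess at the behavior the first puzzle refers to (the original post only links out, so this is an assumption): with no FROM clause, SELECT COUNT(*) is legal and returns 1, while SELECT * raises an error.

    -- Hedged illustration (assumed to be the puzzle in question)
    SELECT COUNT(*);  -- succeeds and returns 1
    -- SELECT *;      -- fails with Msg 263: Must specify table to select from.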

    Read the article

  • Parallel LINQ - PLINQ

    - by nmarun
    It turns out that with .NET 4.0 we can now run a query like a multi-threaded application. Say you want to query a collection of objects and return only those that meet certain conditions. Until now, we basically had one ‘control’ that iterated over all the objects in the collection, checked the condition on each object and returned it if it passed. We can obviously agree that if we ‘break’ this task into smaller ones, assign each task to a different ‘control’ and ask all the controls to do their job in parallel, the time taken to finish the entire task will be much lower. Welcome to PLINQ.

    Let’s take some examples. I have the following method that uses our good ol’ LINQ.

    1: private static void Linq(int lowerLimit, int upperLimit)
    2: {
    3:     // populate a sequence of int values (note: Range takes a count, not an upper bound)
    4:     var source = Enumerable.Range(lowerLimit, upperLimit);
    5:
    6:     // Start a timer
    7:     Stopwatch stopwatch = new Stopwatch();
    8:     stopwatch.Start();
    9:
    10:     // set the expectation => build the expression tree
    11:     var evenNumbers = from num in source
    12:                       where IsDivisibleBy(num, 2)
    13:                       select num;
    14:
    15:     // iterate over and print the returned items
    16:     foreach (var number in evenNumbers)
    17:     {
    18:         Console.WriteLine(string.Format("** {0}", number));
    19:     }
    20:
    21:     stopwatch.Stop();
    22:
    23:     // check the metrics
    24:     Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds));
    25: }

    I’ve added comments for the major steps, but the only thing I want to talk about here is the IsDivisibleBy() method. I know I could have just included the logic directly in the where clause; I called a method to add ‘delay’ to the execution of the query – to simulate a loooooooooong operation (it will be easier to compare the results).

    1: private static bool IsDivisibleBy(int number, int divisor)
    2: {
    3:     // iterate over some database query
    4:     // to add time to the execution of this method;
    5:     // the TableB has around 10 records
    6:     for (int i = 0; i < 10; i++)
    7:     {
    8:         DataClasses1DataContext dataContext = new DataClasses1DataContext();
    9:         var query = from b in dataContext.TableBs select b;
    10:
    11:         foreach (var row in query)
    12:         {
    13:             // Do NOTHING (wish my job was like this)
    14:         }
    15:     }
    16:
    17:     return number % divisor == 0;
    18: }

    Now, let’s look at how to modify this to PLINQ.

    1: private static void Plinq(int lowerLimit, int upperLimit)
    2: {
    3:     // populate a sequence of int values (note: Range takes a count, not an upper bound)
    4:     var source = Enumerable.Range(lowerLimit, upperLimit);
    5:
    6:     // Start a timer
    7:     Stopwatch stopwatch = new Stopwatch();
    8:     stopwatch.Start();
    9:
    10:     // set the expectation => build the expression tree
    11:     var evenNumbers = from num in source.AsParallel()
    12:                       where IsDivisibleBy(num, 2)
    13:                       select num;
    14:
    15:     // iterate over and print the returned items
    16:     foreach (var number in evenNumbers)
    17:     {
    18:         Console.WriteLine(string.Format("** {0}", number));
    19:     }
    20:
    21:     stopwatch.Stop();
    22:
    23:     // check the metrics
    24:     Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds));
    25: }

    That’s it, this is now in PLINQ format. Oh, and if you haven’t found the difference, look at line 11 a little more closely. You’ll see an extension method ‘AsParallel()’ added to the ‘source’ variable. Couldn’t be simpler, right? So this is going to improve the performance for us. Let’s test it. In the Main method of the console application that I’m working on, I make a call to both.
    1: static void Main(string[] args)
    2: {
    3:     // set lower and upper limits
    4:     int lowerLimit = 1;
    5:     int upperLimit = 20;
    6:     // call the methods
    7:     Console.WriteLine("Calling Linq() method");
    8:     Linq(lowerLimit, upperLimit);
    9:
    10:     Console.WriteLine();
    11:     Console.WriteLine("Calling Plinq() method");
    12:     Plinq(lowerLimit, upperLimit);
    13:
    14:     Console.ReadLine(); // just so I get enough time to read the output
    15: }

    Your mileage may vary, but here are the results that I got: It’s quite obvious from the above results that the Plinq() method takes considerably less time than the Linq() version. I’m sure you’ve also noticed that the output of the Plinq() method is not in order. That’s because each of the ‘controls’ we sent to fetch the results reported values as and when they obtained them. This is something about parallel LINQ that one needs to remember – the order of the collection cannot be guaranteed. This could be counted as a negative of PLINQ (emphasis on ‘could’). Nevertheless, if we want the collection to be sorted, we can use a SortedSet (.NET 4.0) or build our own custom ‘sorter’. Either way we go, there’s a good chance we’ll end up with better performance using PLINQ.

    And there’s another negative of PLINQ (depending on how you see it), regarding CPU usage. See the usage for the Linq() method (captured with Resource Monitor): I have dual CPUs; note the height of the peak in the bottom two blocks, and now compare it to what happens when I run the Plinq() method. The difference is obvious: higher usage, but for a shorter duration (the width of the peak). Both these points make sense: Linq() runs for a longer time but uses fewer resources, whereas Plinq() runs for a shorter time and consumes more resources. Even after knowing all this, I’m still inclined towards PLINQ. PLINQ rocks! (no hard feelings, LINQ)

    Read the article

  • SQL SERVER – Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post of Prox ‘n’ Funx. This is a very interesting subject. By the way, Brad Schulz is my favorite guy when it comes to blogging; I respect him and learn a lot from him. With everybody writing something new on this subject, I decided to start a series on the SQL Server 2012 analytic functions.

    SQL Server 2012 introduces the new analytic function CUME_DIST(). This function provides the cumulative distribution of a value. It would be difficult to explain this in words only, so I will attempt a small example to explain the function. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experimenting.

    Let us run the following query:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty,
    CUME_DIST() OVER(ORDER BY SalesOrderID) AS CDist
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY CDist DESC
    GO

    The above query will give us the following result: Now let us understand the formula behind CUME_DIST – for each row, it is the number of rows with values lower than or equal to the current row’s value, divided by the total number of rows – and why the values for SalesOrderID = 43670 are 1: 43670 is the highest of the four SalesOrderIDs, so every row sorts at or below it. Likewise, the values for SalesOrderID = 43667 are 0.5 because, in this resultset, half of the rows have a SalesOrderID at or below 43667.

    Now let us enhance the same example, use PARTITION BY in the OVER clause, and see the results. Run the following query in SQL Server 2012:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
    CUME_DIST() OVER(PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID DESC, CDist DESC
    GO

    Now let us see the result of this query. We have changed the ORDER BY clause and are also partitioning by SalesOrderID. You can see that the CUME_DIST() function now provides different results, and we see the value 1 multiple times: because we are partitioning, each group of SalesOrderID gets its own CUME_DIST() distribution. CUME_DIST() was a long-awaited analytic function, and I am glad to see it in SQL Server 2012.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
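    A quick footnote to this post (a minimal sketch, not from the original article): the formula is easy to verify on an inline rowset. With four distinct values, CUME_DIST() yields 0.25, 0.5, 0.75 and 1.0.

    -- Hedged illustration: (rows with value <= current value) / (total rows)
    SELECT val,
    CUME_DIST() OVER (ORDER BY val) AS CDist
    FROM (VALUES (10), (20), (30), (40)) AS t(val)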

    Read the article

  • ODI 12c - Installing ODI Studio

    - by David Allan
    Today the 12c release of Oracle Data Integrator went GA on OTN. Once you have downloaded it and are running the installer, if you want to install ODI Studio, ensure you select 'Enterprise Installation', as this is where the ODI Studio for 12.1.2 can be installed from. If you choose 'Standalone Installation' you will be left hunting for the ODI Studio software. So ensure you pick Enterprise Installation to get the ODI design studio. Once that's done you are ready to go!

    Read the article

  • Add Bookmarks and Notes to Delicious in IE 8

    - by Asian Angel
    Are you constantly adding bookmarks to your Delicious account while browsing, but want to keep UI use to a minimum? Add bookmarks directly to your account from the context menu using the Share with Delicious accelerator.

    Share with Delicious in Action: To add the accelerator, click on Add to Internet Explorer and confirm the installation when the secondary window appears. This is going to be much better than having the Favorites Bar or a new toolbar taking up precious UI room. When you find a webpage that you would like to bookmark, right click within the page, go to All Accelerators and select Share with Delicious. The form for the new bookmark will open in a new tab with the URL and title filled in. All that you need to do is add any desired notes/tags and save the bookmark.

    Suppose that you want notes from the page added to the bookmark. Highlight the desired text, right click on it, then go to All Accelerators and select Share with Delicious. As before, the form will open in a new tab…you can see the highlighted text was entered into the notes section for the new bookmark. All that is left to do is add an appropriate tag and save. Once you save your new bookmark the tab will auto-navigate to the webpage that you just saved. Returning to our account showed the new bookmark ready for future use, along with the notes for later reference.

    Conclusion: If you add bookmarks to your Delicious account but want to save UI room, then the Share with Delicious accelerator will make a nice addition to Internet Explorer.

    Links: Add the Share with Delicious accelerator to Internet Explorer 8

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
    Summary (aka TL/DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.)

    In this blog, I will discuss the design rationale, the implementation, configuration, and the steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release.

    Pace of Innovation: It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion queries per minute, with 70x higher cross-shard JOIN performance, a Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access labs release at today's MySQL Innovation Day event demonstrates the continued pace of Cluster development, and provides an opportunity for the community to evaluate and give feedback on the new features they want to see.

    What’s the Plan for MySQL Cluster 7.3? Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including:
    - New NoSQL APIs;
    - Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud;
    - Performance and scalability enhancements;
    - Taking advantage of features in the latest MySQL 5.x Server GA.

    Design Rationale: MySQL Cluster is designed as a “Not-Only-SQL” database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including:
    - Concurrent NoSQL and SQL access to the database;
    - Auto-sharding with simple scale-out across commodity hardware;
    - Multi-master replication with failover and recovery both within and across data centers;
    - Shared-nothing architecture with no single point of failure;
    - Online scaling and schema changes;
    - ACID compliance and support for complex queries, across shards.

    Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use-cases, including:
    - Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support;
    - In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables. (A sketch of what such a constraint looks like follows below.)
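    As a sketch of what such a constraint looks like – an editor's example, assuming the labs release accepts standard MySQL Foreign Key syntax on NDB tables, which the implementation notes below suggest:

    -- Hedged sketch: a parent/child pair with a cascading Foreign Key on NDB
    CREATE TABLE parent (
      id INT NOT NULL PRIMARY KEY
    ) ENGINE=NDB;

    CREATE TABLE child (
      id INT NOT NULL PRIMARY KEY,
      parent_id INT,
      CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
        REFERENCES parent (id) ON DELETE CASCADE
    ) ENGINE=NDB;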
    Implementation: The Foreign Key functionality is implemented directly within MySQL Cluster’s data nodes, allowing any client API accessing the cluster to benefit from it – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented:
    - CASCADE
    - RESTRICT
    - NO ACTION
    - SET NULL

    In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the cluster continues to serve both read and write requests during the operation. An important difference to note from the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the data nodes themselves; instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless the CASCADE action is used, in which case the delete operation will result in the corresponding rows in the child table being deleted. The engineering team plans to change this behavior in a subsequent preview release. Also note that whereas in InnoDB "NO ACTION" is identical to "RESTRICT", in the case of MySQL Cluster “NO ACTION” means “deferred check”, i.e. the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints.

    Configuration: There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe:
    1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ENGINE=NDB statements in the correct sequence to ensure constraints are enforced.
    2. Alternatively, drop the Foreign Key constraints prior to the import process and then recreate them when complete.

    Getting Started: Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. You can download the MySQL Cluster 7.3 Labs Release with Foreign Keys today (select the mysql-cluster-7.3-labs-June-2012 build). If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3). Post any questions to the MySQL Cluster forum, where our engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section of this blog.

    Summary: MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and which other features you want to see in future releases.

    * Safe Harbor Statement: This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

    Read the article

  • How you can extend Tasklists in Fusion Applications

    - by Elie Wazen
    In this post we describe the process of modifying and extending a tasklist available in the Regional Area of a Fusion Applications UI Shell. This is particularly useful to customers who would like to expose Setup Tasks (generally available in the Fusion Setup Manager application) in the workareas of the various functional pillars. Oracle Composer, the tool used to implement such extensions, allows changes to be made at runtime. The example provided in this document is for an Oracle Fusion Financials page.

    Let us examine the case of a customer role that requires access to both a workarea with its associated functional tasks and an FSM (setup) task. Both of these tasks represent ADF taskflows, but each is accessible from a different page. We will show how an FSM task is added to a functional tasklist and made accessible to a user from within a single workarea, eliminating the need to navigate between the FSM application and the functional workarea where transactions are conducted.

    In general, tasks in Fusion Applications are grouped in two ways:
    - Setup tasks are grouped in tasklists available to implementers in the Functional Setup Manager (FSM). These tasks are accessed by implementation users and in general do not represent daily operational tasks that fit into a functional business process; they were consequently included in the FSM application. For these tasks, the primary organizing principle is precedence between tasks: if the task "Manage Suppliers" has prerequisites, those tasks must precede it in a tasklist. Task lists are organized to efficiently implement an offering.
    - Tasks frequently performed as part of business process flows are made available as links in the tasklist of their corresponding menu workarea. The primary organizing principle in the menu and task pane entries is to group tasks that are generally accessed together.

    Customizing a tasklist thus becomes necessary in business scenarios where a task packaged under FSM as a setup task is, for a particular customer, a regular maintenance task that is accessed for record updates or creation as part of normal operational activities, and where the frequency of this access merits the inclusion of that task in the related operational tasklist.

    A user with the role of maintaining Journals in General Ledger is also responsible for maintaining Chart of Accounts Mappings. In the Fusion Financials product family, Manage Journals is a task available from within the Journals menu, whereas Chart of Accounts Mapping is available via FSM under the Define Chart of Accounts tasklist.

    Figure 1. The Manage Chart of Accounts Mapping Task in FSM
    Figure 2. The Manage Journals Task in the Task Pane of the Journals Workarea

    Our goal is to simplify cross-task navigation and allow the user to access both tasks from a single tasklist on a single page, without having to navigate to FSM for the Mapping task and to the Journals workarea for the Manage task. To accomplish that, we use Oracle Composer to customize the Journals tasklist by adding the Mapping task to it.

    Identify the taskflow name and path of the FSM task: The first step in our process is to identify the underlying taskflow for the Manage Chart of Accounts Mappings task. We select Setup and Maintenance from the Navigator to launch the FSM application, and we query the task from Manage Tasklists and Tasks.
    Figure 3. Task Details, including the Taskflow path. The Manage Chart of Accounts Mapping Task Taskflow is: /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow We copy that value and use it later as a parameter to our new task in the customized Journals tasklist. Customize the Journals Page: A user with Administration privileges can start the runtime customization directly from the Administration menu of the Global Area. This customization is done at the Site level and, once implemented, becomes available to all users with access to the Journals workarea. Figure 4. Customization Menu. The Oracle Composer window is displayed in the same browser, and the hierarchy of the page components is displayed and available for modification. Figure 5. Oracle Composer. In the Composer window, select the PanelFormLayout node and click on the Edit button. Note that the selected component is simultaneously highlighted in the lower pane in the browser. In the Properties popup window, select the Tasks List and Task Properties tab, where the user finds the hierarchy of the tasklist and is able to edit nodes or create new ones. Figure 6. The Tasklist in edit mode. Add a Child Task to the Tasklist: In the Edit window the user will now create a child node at the desired level in the hierarchy by selecting the immediate parent node and clicking on the insert node button. This process requires four values to be set, as described in Table 1 below: Focus View Id = /JournalEntryPage (the Focus View ID of the UI Shell hosting the tasklist we want to customize; a simple way to determine this value is to copy it from any of the standard tasks on the tasklist); Label = COA Mapping (the display name of the task as it will appear in the tasklist); Task Type = dynamicMain (with this value, the page contains a new link in the Regional Area, and clicking the link opens a new tab with the loaded task); Taskflowid = /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow (the taskflow path we retrieved from the task definition in FSM earlier in the process). Table 1. Parameters and Values for the Task to be added to the customized Tasklist. Figure 7. The parameters window of the newly added Task. Access the FSM Task from the Journals Workarea: Once the FSM task is added and its parameters defined, the user saves the record and closes the Composer, making the new task immediately available to users with access to the Journals workarea (refer to Figure 8 below). Figure 8. The COA Mapping Task is now visible and can be invoked from the Journals Workarea. Additional Considerations: If a task flow is part of a product that is deployed on the same app server as the tasklist workarea, then that task flow can be added to a customized tasklist in that workarea. Otherwise, that task flow can be invoked from its parent product's workarea tasklist by selecting that workarea from the Navigator menu. For example, the following taskflows belong, respectively, to the Subledger Accounting and General Ledger products.
    /WEB-INF/oracle/apps/financials/subledgerAccounting/accountingMethodSetup/mappingSets/ui/flow/MappingSetFlow.xml#MappingSetFlow /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow Since both the Subledger Accounting and General Ledger products are part of the LedgerApp J2EE application and are both deployed on the General Ledger cluster server (Figure 9 below), the user can add both of the above taskflows to the tasklist in the /JournalEntryPage FocusViewId workarea. Note: both FSM taskflows and functional taskflows can be added to tasklists as described in this document. Figure 9. The Topology of the Fusion Financials Product Family. Note that Subledger Accounting and General Ledger are both deployed on the Ledger App. Conclusion: In this document we have shown how an administrative user can edit the tasklist in the Regional Area of a Fusion Apps page using Oracle Composer. This is useful for cases where tasks packaged in different workareas are frequently accessed by the same user. By making these tasks available from the same page, we minimize the number of navigation steps the user has to take to perform their transactions and queries in Fusion Apps. The example above showed that tasks classified as setup tasks, that is, tasks made accessible to implementation users from the FSM module, can be added to the workarea of their respective Fusion application. This eliminates the need to navigate to FSM to access tasks that are both setup and regular maintenance tasks. References: Oracle Fusion Applications Extensibility Guide 11g Release 1 (11.1.1.5), Part Number E16691-02 (Section 3.2); Oracle Fusion Applications Developer's Guide 11g Release 1 (11.1.4), Part Number E15524-05.

    Read the article

  • CodePlex Daily Summary for Monday, December 10, 2012

    CodePlex Daily Summary for Monday, December 10, 2012. Popular Releases: MOBZcript: MOBZcript 0.9.1: First fix - typo in mobzystems.com-url! Media Companion: MediaCompanion3.511b release: Two more bug fixes: - General Preferences were not getting restored - Fanart and poster image files were being locked, preventing changes to them. Vodigi Open Source Interactive Digital Signage: Vodigi Release 5.5: The following enhancements and fixes are included in Vodigi 5.5. Vodigi Administrator - Manage Music Files - Add Music Files to Image Slide Shows - Manage System Messages - Display System Messages to Users During Login - Ported to Visual Studio 2012 and MVC 4 - Added New Vodigi Administrator User Guide. Vodigi Player - Improved Login/Schedule Startup Procedure - Startup Using Last Known Schedule when Disconnected on Startup - Improved Check for Schedule Changes - Now Every 15 Minutes - Pla... Secretary Tool: Secretary Tool v1.1.0: I'm still considering this version a beta version because, while it seems to work well for me, I haven't received any feedback and I certainly don't want anyone relying solely on this tool for calculations and such until its correct functioning is verified by someone. This version includes several bug fixes, including a rather major one with Emergency Contact Information not saving. Also, reporting is completed. There may be some tweaking to the reporting engine, but it is good enough to rel... VidCoder: 1.4.10 Beta: Added progress percent to the title bar/task bar icon. Added MPLS information to Blu-ray titles. Fixed the following display issues in Windows 8: uncentered text in textbox controls, disabled controls not having gray text (making them hard to identify as disabled), and drop-down menus having hard-to-distinguish white on light-blue text. Added more logging to proxy disconnect issues and increased timeout on initial call to help prevent timeouts. Fixed encoding window showing the built-in pre... WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.400: Version 2.5.0.400 (Release): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog legend: [B] Breaking change; [O] Marked member as obsolete. Update the documentation. InfoMan: Write the documentation. Other Downloads: Downloads Overview. Home Access Plus+: v8.5: v8.5.1208.1500 This is a point release; for the other parts of HAP+ see the v8.3 release. Fixed: #me#me issue with the Edit Profile Link. Updated: 8.5.1208 release. Updated: Documentation with hidden booking system feature. Added: Room Drop Down to the Booking System (no control panel interface), can be Resource Specific. Fixed: Recursive AD Group Membership Lookup. Fixed: User.IsInRole with recursive lookup. Added: Group Searches to hidefrom and showto for the booking system. Added: DFS Targeting ... Ynote Classic: Ynote Classic version 1.0: Ynote Classic is a text editor made by SS Corporation. It can help you write code by providing different code snippets for creating html or batch files. You can also create C/C++/Java files with SS Ynote Classic. The author of Ynote Classic is Samarjeet Singh. Ynote Classic is available with different themes and skins. It can also compile *.bat files into an executable file, and it has a calculator built in. 1st version released on 6-12-12 by Samarjeet Singh.
    Please contact on http:... Http Explorer: httpExplorer-1.1: httpExplorer now has the ability to connect to http servers via web proxies. The proxy may be explicitly specified by hostname or IP address, or it may be specified via the Internet Options settings of Windows. You may also specify credentials to pass to the proxy if the proxy requires them. These credentials may be NTLM or basic authentication (clear-text username and password). Bee OPOA Platform: Bee OPOA Demo V1.0.001: Initial version. Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.78: Fix for issue #18924 - using -pretty option left in ///#DEBUG blocks. Fix for issue #18980 - bad += optimization caused bug in resulting code. Optimization has been removed pending further review. XNA GPU Particles Tool: Multi-Effect Release 20525: This version allows creating multiple effects with multiple emitters. Load, change and save particle effects. Display them in various ways to help design the effect. There are also some help pages to explain what the main settings do. This includes some preset emitter settings and a camera position for a First Person view. Source code only, for XNA 4 and Visual Studio Express 2010. Implementing Google Protocol Buffers using C#: Console Application: Compiled version of the application. The application is developed using the MS.NET 4.0 runtime. MB Tools: WDSMulticastMonitor V2.0: Updated version of WDSMulticast Monitor: uses the WdsTransportManager COM object instead of WdsManager, which means it works on servers that only have the transport role installed; the old version required the full WDS role to be installed to work. Some minor bugfixes and extra error checks added. Added transfer rate to output, although this seems to work only sporadically; it's not a limitation of this script though, since when it outputs nothing, neither do the normal WDS tools. Yahoo! UI Library: YUI Compressor for .Net: Version 2.2.0.0 - Epee: New: Web Optimization package! Cleaned up the nuget packages. BugFix: minifying lots of files will now be faster because of a recent regression in some code. (We were instantiating something far too many times.) DtPad - .NET Framework text editor: DtPad 2.9.0.40: English: + A new built-in editor for the management of CSV files, including the edit of cells, deleting and adding new rows, replacement of the delimiter character and much more (issue #1137) + The limit of rows allowed before the decommissioning of their side panel has been raised (new default: 1.000) (issue #1155, only partially solved) + Pressing CTRL+TAB now opens a screen in DtPad that shows the list of opened tabs (issue #1143) + Note... AvalonDock: AvalonDock 2.0.1746: Welcome to the new release of AvalonDock 2.0. This release contains a lot of bug fixes and some great improvements: Views Caching: content of Documents and Anchorables is no longer recreated every time the user moves it. The autohide pane opens really fast now. Two new themes, Expression (Dark and Light) and Metro (both of them still in an experimental stage). If you already use AD 2.0 or plan to integrate it in your future projects, I'm interested in your ideas for new features: http://avalondock... AcDown Anime Downloader: AcDown Anime Downloader
    v4.3.2: AcDown is a downloader supporting Acfun, Bilibili, YouTube, SF and a number of other sites, with a companion AcPlay player for playing downloaded content. AcDown is written in C# and requires .NET Framework 2.0 or later. v4.3.2 includes several download fixes for Acfun and Bilibili, and supports 32- and 64-bit Windows XP/... ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.2: FineUI is a set of ASP.NET 2.0 controls based on ExtJS, whose goal is building pages with No JavaScript, No CSS, No UpdatePanel, No ViewState and No WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0 (Apache). Links: http://fineui.com/bbs/ http://fineui.com/demo/ http://fineui.com/doc/ http://fineui.codeplex.com/. Changelog 2012-12-03 v3.2.2: changes to button/button_menu.aspx; the Window control adds a Plain property, ToolbarPosition adds a Footer option, and a new FooterBarAlign property was added; ... Player Framework by Microsoft: Player Framework for Windows Phone 8: This is a brand new version of the Player Framework for Windows Phone, available exclusively for Windows Phone 8, and now based upon the Player Framework for Windows 8. While this new version is not backward compatible with Windows Phone 7 (get that from http://smf.codeplex.com/releases/view/88970), it does offer the same great feature set plus dozens of new features such as advertising, localization support, and improved skinning. Click here for more information about what's new in the Windows P... New Projects: ConfigMgrUI: UI for System Center Configuration Manager 2007 & 2012, System Center Orchestrator 2012 and MDT. cvn demo poject: just a test project. Delicious Notify Plugin: Lets you push a blog post straight to Delicious from Live Writer. Insert Acronym Tags: Lets you insert <acronym> and <abbr> tags into your blog entry more easily. Insert File(s): Allows you to include up to three files for publishing in a post using Windows Live Writer. Insert Quick Link: Allows you to paste a link into the Writer window and get a window similar to the one in Writer where you can change what text is to appear, whether it opens in a new window, etc. ISD training tasks: ISD training examples and tasks. jack1209132501: testing. jack12091327: testing. Kata: Just a couple of solutions to katas found around the web. KJFramework (Mono Version): KJFramework (Mono Version). Liber: The dynamic library manager: manage ebooks and derivatives under different tags & properties, and share Libers under PUBLIC TAGS in the LIBER Community... Libro Visual Basic 2012 spiegato a mia nonna: All the examples from the book "Visual Basic 2012 spiegato a mia nonna". Work in progress... log4net SharePoint 2013 Appender: log4net SharePoint 2013 Appender. MethodSerializer: .NET method serializer/deserializer. MGR.VSSolutionManager: MGR.VSSolutionManager is a tool to help manage (update, compare, merge) Visual Studio solution files. MIAGE IHM: Blabla. PCV_Tutorial: PCV_Tutorial. Roll Manager: Roll Manager is a client/server application that makes it easy to create scripts with many actions that run against groups of servers. No more DOS batch scripts. It's developed in C# .NET 3.5. RootFinder: Comprehensive implementation of a 1D RootFinder framework solution - implements the full suite of bracketing and open methods, and their primitives/compounds. SGF Editor: SGF Editor reads/writes .sgf files, edits Go game trees, etc. It has several useful commands for reviewing games.
    For searching purposes: goban, baduk, weiqi. SharePoint 365 Utilities: SharePoint 365 aims at helping SharePoint developers improve the delivery time of SharePoint solutions by offering a set of utilities for DEV/Deploy/Maintenance. SNMP: a wrapper around the Windows API and the WinSNMP API. Stock Analitator: Stock Analyzer using the Relative Strength Index. TPV: TPV. Trending Topic Component: This component is a final assignment of Component Based Programming in Duta Wacana Christian University, Yogyakarta, Indonesia. Visual .NET RogueLike: A basic roguelike hacked out in Visual .NET. I'm unsure where I want to go with the project, so I'm just putting it out there for now. YooYoo: Personal Developing. You are here (for Windows Mobile): This sample shows you how to play a *.wav file on your device with Compact Framework 2.0. There is better support for playing music on Compact Framework 3.5. The idea of this sample is from this video: http://www.youtube.com/watch?v=xMcSNfrT-4M. Zoom.PE: PE read/write library. ZX AAA Workshop: an editor of ZX Spectrum graphics; it supports the emulators Specaculator, Unreal and Fuse, and saves images in the png, gif, bmp, tiff and jpeg formats.

    Read the article

  • Developing my own package manager: how do I resolve dependencies?

    - by shantanu
    According to the debian control file, the ant package's dependency list is: Depends: default-jre-headless | java2-runtime-headless | java5-runtime-headless | java6-runtime-headless, libxerces2-java. That means it depends on default-jre-headless or java2-runtime-headless or java5-runtime-headless or java6-runtime-headless. But how can I determine which package to select as the dependency? Note: I am trying to develop a package manager. I cannot use the provided package managers because they read the sources.list file from a specific directory.
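    A minimal sketch of one way to resolve such "|" groups, assuming a hypothetical in-memory package index (names like DependencyResolver and the index lookup are illustrative, not part of any real tool; real resolvers such as apt additionally honor virtual packages declared via Provides, prefer alternatives that are already installed, and evaluate version constraints):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class DependencyResolver
    {
        // Hypothetical index of package names known to our package manager.
        private readonly HashSet<string> index;

        public DependencyResolver(IEnumerable<string> knownPackages)
        {
            index = new HashSet<string>(knownPackages);
        }

        // Resolve a Debian-style Depends field: comma-separated groups, where a
        // group is satisfied by any one of its "|"-separated alternatives.
        public List<string> Resolve(string dependsField)
        {
            var chosen = new List<string>();
            foreach (var group in dependsField.Split(','))
            {
                var alternatives = group.Split('|')
                    .Select(a => a.Split('(')[0].Trim()) // drop "(>= 1.6)"-style version constraints
                    .Where(a => a.Length > 0)
                    .ToList();
                // Simple heuristic: take the first alternative the index can satisfy.
                var pick = alternatives.FirstOrDefault(p => index.Contains(p));
                if (pick == null)
                    throw new InvalidOperationException(
                        "No installable alternative among: " + string.Join(" | ", alternatives));
                chosen.Add(pick);
            }
            return chosen;
        }
    }

    With an index containing default-jre-headless and libxerces2-java, resolving the ant Depends line above would pick default-jre-headless for the first group and libxerces2-java for the second.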

    Read the article

  • OracleServiceBus+SOA in same server

    - by Manoj Neelapu
    Oracle Service Bus 11gR1 (11.1.1.3) supports running in the same JVM as SOA. This tutorial covers how to create a combined SOA+OSB domain that runs in a single JVM. For this tutorial we will use a flavor of the WebLogic installer bundled with both OEPE and Coherence components (e.g. oepe111150_wls1033_win32.exe); the WebLogic installer bundled with Coherence and OEPE components can be seen in the screenshot. Oracle Service Bus 11gR1 (11.1.1.3) has built-in caching support for Business Services using Coherence, which is why we have to install Coherence before installing OSB. To get SOA and OSB running in the same domain, we have to install both SOA and OSB in the above ORACLE_HOME. After installation we should see both the SOA and OSB homes, as highlighted in red. We can also see that the Coherence components (mandatory for OSB) and the optional OEPE are installed. Now we will execute RCU (ofm_rcu_win_11.1.1.3.0_disk1_1of1) to install the schemas for SOA and OSB. The new RCU contains the OSB tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE), which are loaded as part of the SOAINFRA schema. After this step we have to create the SOA+OSB domain using the configuration wizard, located under $WEBLOGIC_HOME\common\bin\config.* (.cmd or .sh as per your platform). While creating a domain we select the options for SOA Suite and Oracle Service Bus Extension - All Domain Topologies. We can also bundle Enterprise Manager in the same installation or on a different server; in this case we will use Enterprise Manager in the same domain, so we selected the Enterprise Manager component as well. There is another option for OSB, Oracle Service Bus Extension - Single Server Domain Topology. This topology is for users who want to run OSB in a single-server configuration. Currently SOA doesn't support the single-server topology, so it cannot be used with a SOA domain; it can only be used for standalone OSB installations. We can continue with domain configuration until we reach the screen below. The following steps are mandatory if we want to have SOA and OSB run in the same JVM: we should select Managed Servers, Clusters and Machines as shown below. After this selection you should see a screen with two servers: one managed server for OSB and one for SOA. Since we would like to have both servers in one managed server (one JVM), we will have to do one

    Read the article

  • Speed Up the Help Dialog in Windows and Office

    - by Matthew Guay
    When you click help, you don’t want to wait for your computer to bring it to you. Here’s how you can speed up the help dialog in Windows and Office. If you have a slow internet connection, chances are you’ve been frustrated by the Help dialog in Windows and Office trying to download fresh content every time you open it. This can be great if the updated help files contain better content, but sometimes you just want to find what you were looking for without waiting. Here’s how you can turn off the automatic online help. Use Local Help in Windows: Windows 7 and Vista’s help dialog usually tries to load the latest content from the net, but this can take a long time on slow connections. If you’re seeing the above screen a lot, you may want to switch to offline help. Click the “Online Help” button at the bottom, and select “Get offline Help”. Now your computer will just load the pre-installed help files. And don’t worry; if there’s a major update to your help files, Windows will download and install it through Windows Update. Stupid Geek Tip: An easy way to open Windows Help is to click on your desktop or Start Menu and press F1 on your keyboard. Use Local Help in Office: This same trick works in Office 2007 and 2010. We’ve actually had more problems with Office’s help being tardy. Solve this the same way as with Windows help. Click on the “Connected to Office.com” or “Connected to Office Online” button, depending on your version of Office, and select “Show content only from this computer”. This will automatically change the settings for Help in all of your Office applications. While this may not be a major trick, it can be helpful especially if you have a slow internet connection and want to get things done quickly.

    Read the article

  • The Shift: how Orchard painlessly shifted to document storage, and how it’ll affect you

    - by Bertrand Le Roy
    We’ve known it all along. The storage for Orchard content items would be much more efficient using a document database than a relational one. Orchard content items are composed of parts that serialize naturally into infoset kinds of documents. Storing them as relational data like we’ve done so far was unnatural and requires the data for a single item to span multiple tables, related through 1-1 relationships. This means lots of joins in queries, and a great potential for Select N+1 problems. Document databases, unfortunately, are still a tough sell in many places that prefer the more familiar relational model. Being able to x-copy Orchard to hosters has also been a basic constraint in the design of Orchard. Combine those with the necessity at the time to run in medium trust, and with license compatibility issues, and you’ll find yourself with very few reasonable choices. So we went, a little reluctantly, for relational SQL stores, with the dream of one day transitioning to document storage. We have played for a while with the idea of building our own document storage on top of SQL databases, and Sébastien implemented something more than decent along those lines, but we had a better way all along that we didn’t notice until recently… In Orchard, there are fields, which are named properties that you can add dynamically to a content part. Because they are so dynamic, we have been storing them as XML into a column on the main content item table. This infoset storage and its associated API are fairly generic, but were only used for fields. The breakthrough was when Sébastien realized how this existing storage could give us the advantages of document storage with minimal changes, while continuing to use relational databases as the substrate. public bool CommercialPrices { get { return this.Retrieve(p => p.CommercialPrices); } set { this.Store(p => p.CommercialPrices, value); } } This code is very compact and efficient because the API can infer from the expression what the type and name of the property are. It is then able to do the proper conversions for you. For this code to work in a content part, there is no need for a record at all. This is particularly nice for site settings: one query on one table and you get everything you need. This shows how the existing infoset solves the data storage problem, but you still need to query. Well, for those properties that need to be filtered and sorted on, you can still use the current record-based relational system. This of course continues to work. We do however provide APIs that make it trivial to store into both record properties and the infoset storage in one operation: public double Price { get { return Retrieve(r => r.Price); } set { Store(r => r.Price, value); } } This code looks strikingly similar to the non-record case above. The difference is that it will manage both the infoset and the record-based storages. The call to the Store method will send the data in both places, keeping them in sync. The call to the Retrieve method does something even cooler: if the property you’re looking for exists in the infoset, it will return it, but if it doesn’t, it will automatically look into the record for it. And if that wasn’t cool enough, it will take that value from the record and store it into the infoset for the next time it’s required. 
This means that your data will start automagically migrating to infoset storage just by virtue of using the code above instead of the usual: public double Price { get { return Record.Price; } set { Record.Price = value; } } As your users browse the site, it will get faster and faster as Select N+1 issues will optimize themselves away. If you preferred, you could still have explicit migration code, but it really shouldn’t be necessary most of the time. If you do already have code using QueryHints to mitigate Select N+1 issues, you might want to reconsider those, as with the new system, you’ll want to avoid joins that you don’t need for filtering or sorting, further optimizing your queries. There are some rare cases where the storage of the property must be handled differently. Check out this string[] property on SearchSettingsPart for example: public string[] SearchedFields { get { return (Retrieve<string>("SearchedFields") ?? "") .Split(new[] {',', ' '}, StringSplitOptions.RemoveEmptyEntries); } set { Store("SearchedFields", String.Join(", ", value)); } } The array of strings is transformed by the property accessors into and from a comma-separated list stored in a string. The Retrieve and Store overloads used in this case are lower-level versions that explicitly specify the type and name of the attribute to retrieve or store. You may be wondering what this means for code or operations that look directly at the database tables instead of going through the new infoset APIs. Even if there is a record, the infoset version of the property will win if it exists, so it is necessary to keep the infoset up-to-date. It’s not very complicated, but definitely something to keep in mind. Here is what a product record looks like in Nwazet.Commerce for example: And here is the same data in the infoset: The infoset is stored in Orchard_Framework_ContentItemRecord or Orchard_Framework_ContentItemVersionRecord, depending on whether the content type is versionable or not. A good way to find what you’re looking for is to inspect the record table first, as it’s usually easier to read, and then get the item record of the same id. Here is the detailed XML document for this product: <Data> <ProductPart Inventory="40" Price="18" Sku="pi-camera-box" OutOfStockMessage="" AllowBackOrder="false" Weight="0.2" Size="" ShippingCost="null" IsDigital="false" /> <ProductAttributesPart Attributes="" /> <AutoroutePart DisplayAlias="camera-box" /> <TitlePart Title="Nwazet Pi Camera Box" /> <BodyPart Text="[...]" /> <CommonPart CreatedUtc="2013-09-10T00:39:00Z" PublishedUtc="2013-09-14T01:07:47Z" /> </Data> The data is neatly organized under each part. It is easy to see how that document is all you need to know about that content item, all in one table. If you want to modify that data directly in the database, you should be careful to do it in both the record table and the infoset in the content item record. In this configuration, the record is now nothing more than an index, and will only be used for sorting and filtering. Of course, it’s perfectly fine to mix record-backed properties and record-less properties on the same part. It really depends what you think must be sorted and filtered on. In turn, this potentially simplifies migrations considerably. So here it is, the great shift of Orchard to document storage, something that Orchard has been designed for all along, and that we were able to implement with a satisfying and surprising economy of resources. 
Expect this code to make its way into the 1.8 version of Orchard when that’s available.
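    To make the read-through behavior concrete, here is an illustrative sketch of the retrieve-with-fallback-and-backfill logic described above. This is not Orchard's actual implementation; the dictionaries are simplified stand-ins for the XML infoset and the part's record table, and the type and member names are made up for the example:

    using System;
    using System.Collections.Generic;

    class InfosetBackedPart
    {
        // Stand-ins: Orchard keeps the infoset as an XML document on the content
        // item record; "record" here represents the part's own relational row.
        private readonly IDictionary<string, string> infoset = new Dictionary<string, string>();
        private readonly IDictionary<string, string> record;

        public InfosetBackedPart(IDictionary<string, string> existingRecord)
        {
            record = existingRecord;
        }

        public T Retrieve<T>(string name, Func<string, T> parse)
        {
            // The infoset wins when the value exists there...
            if (infoset.TryGetValue(name, out var fromInfoset))
                return parse(fromInfoset);
            // ...otherwise fall back to the record and backfill the infoset,
            // so the value has migrated by the next read.
            if (record.TryGetValue(name, out var fromRecord))
            {
                infoset[name] = fromRecord;
                return parse(fromRecord);
            }
            return default(T);
        }

        public void Store(string name, object value)
        {
            var text = Convert.ToString(value);
            infoset[name] = text;           // always write the document side
            if (record.ContainsKey(name))   // keep the indexed record in sync
                record[name] = text;        // so filtering and sorting still work
        }
    }

    Reading through Retrieve is what makes the data migrate "automagically": the first read copies the record value into the infoset, and every later read is served from the document.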

    Read the article

  • Programmatically Starting and Stopping FTP Sites in IIS 7 and IIS 8

    - by The Official Microsoft IIS Site
    I was recently contacted by someone who was trying to use Windows Management Instrumentation (WMI) code to stop and restart FTP websites by using code that he had written for IIS 6.0; his code was something similar to the following: Option Explicit On Error Resume Next Dim objWMIService, colItems, objItem ' Attach to the IIS service. Set objWMIService = GetObject( "winmgmts:\root\microsoftiisv2" ) ' Retrieve the collection of FTP sites. Set colItems = objWMIService.ExecQuery( "Select...(read more)
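    The code above targets the winmgmts:\root\microsoftiisv2 namespace, which belongs to the IIS 6.0 WMI provider and is only present on IIS 7 and 8 when the IIS 6 Management Compatibility components are installed. Below is a minimal sketch of the native IIS 7+ approach using the managed Microsoft.Web.Administration API instead; it requires a reference to Microsoft.Web.Administration.dll and an elevated process, and the site name is a placeholder (FTP sites published through FTP 7.x appear in the same Sites collection as web sites):

    using System;
    using Microsoft.Web.Administration;

    class FtpSiteControl
    {
        static void Main()
        {
            using (var serverManager = new ServerManager())
            {
                // "Default FTP Site" is a placeholder; use your FTP site's name.
                Site site = serverManager.Sites["Default FTP Site"];
                if (site == null)
                    throw new InvalidOperationException("FTP site not found.");

                // Stop the site if it is running, then start it again. Production
                // code should poll site.State, since stops and starts can pass
                // through the transient Stopping/Starting states.
                if (site.State == ObjectState.Started)
                    site.Stop();
                if (site.State == ObjectState.Stopped)
                    site.Start();

                Console.WriteLine("Site state: " + site.State);
            }
        }
    }

    For scripted scenarios, the same operations are also exposed through the newer root\WebAdministration WMI namespace and through appcmd (for example, appcmd stop site /site.name:"My FTP Site").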

    Read the article

  • How To Run A Shell Script Again And Again Having X Interval Of Time?

    - by Muhammad Hassan
    I have a shell script on my Ubuntu Server 14.04 LTS at ./ShellScript.sh. I set up /etc/rc.local to run the shell script after boot but before login using the code below. Run this: sudo nano /etc/rc.local then add the following and save. #!/bin/sh -e # # rc.local # # This script is executed at the end of each multiuser runlevel. # Make sure that the script will "exit 0" on success or any other # value on error. # # In order to enable or disable this script just change the execution # bits. # # By default this script does nothing. ./ShellScript.sh exit 0 Now I want to run this shell script again and again, with a 15-minute interval between runs, after boot but before login. Can I do it? Update 1:) When I run crontab -e I get the following. What should I do now? no crontab for root - using an empty one Select an editor. To change later, run 'select-editor'. 1. /bin/ed 2. /bin/nano <---- easiest 3. /usr/bin/vim.basic 4. /usr/bin/vim.tiny Choose 1-4 [2]: After selecting 2, I got crontab: "/usr/bin/sensible-editor" exited with status 2 UPDATE 2:) I updated ShellScript.sh as below (sleep 900 inside the loop gives the 15-minute interval; an exit 0 inside the loop would end the script after its first pass, so the loop must not exit)... #!/bin/bash # Testing ShellScript... while true do echo "ShellScript Start Running..." ********************************** All My Shell Script Codes/Script/Commands ********************************** echo "ShellScript End Running..." sleep 900 done Then run this: sudo nano /etc/rc.local then add the following and save. #!/bin/sh -e # # rc.local # # This script is executed at the end of each multiuser runlevel. # Make sure that the script will "exit 0" on success or any other # value on error. # # In order to enable or disable this script just change the execution # bits. # # By default this script does nothing. sh ./ShellScript.sh & exit 0

    Read the article

  • Can't boot after recent system update

    - by Ron
    Dear all, After a recent system update (I think I saw something kernel-related but don't really remember) my Ubuntu became unbootable. When I select "Ubuntu" from the boot menu, I'm greeted by a GRUB console and I don't know what to do (typing help shows some helpful commands for gods; unfortunately I'm a mere mortal). I'm doing this on Windows XP now. How do I go back to the future? Edit: Ubuntu was installed using Wubi

    Read the article

< Previous Page | 460 461 462 463 464 465 466 467 468 469 470 471  | Next Page >