Search Results

Search found 37654 results on 1507 pages for 'function prototypes'.


  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement, and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."

    Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.

    What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.

    Supply Chain Management Systems in a Virtual Enterprise

    Historically, enterprise software was constructed to automate the ERP, and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP-agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise, most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.

    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.

    Case in point: almost everyone has ordered from Amazon.com at one time or another. Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, require a very different approach to fulfillment than the traditional "take an order and ship it from your own warehouse" model. Amazon had no choice but to develop a complex, expensive, custom solution to tackle this problem, as no product solution was available at the time. Now, other companies that want to follow similar models have a better off-the-shelf choice: Oracle Distributed Order Orchestration (DOO).

    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick-and-mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick-and-mortar retail operation added an online business, it turned to Oracle Fusion DOO to bring these entities together.

    Disturbing the Peace with Acquisitions

    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems, or even introduce an entirely new business, like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.

    The Flip Side of Fulfillment

    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how do you manage complex supply in a flexible way when there are multiple trading parties involved? How do you manage the supply to suppliers? How do you manage critical components that need to merge in a tier-two or tier-three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.

    The Clear Front Runner

    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.


  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive.

        IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL
            DROP TABLE #Test;
        GO
        CREATE TABLE #Test (id INTEGER PRIMARY KEY CLUSTERED);

        INSERT #Test (id)
        SELECT V.number
        FROM master.dbo.spt_values AS V
        WHERE V.[type] = N'P'
        AND V.number BETWEEN 1 AND 1000;

    Let's say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10. One way to write that query would be:

        SELECT T.id
        FROM #Test AS T
        WHERE T.id IN
        (
            101,102,103,104,105,106,107,108,109,
            111,112,113,114,115,116,117,118,119,
            121,122,123,124,125,126,127,128,129,
            131,132,133,134,135,136,137,138,139,
            141,142,143,144,145,146,147,148,149,
            151,152,153,154,155,156,157,158,159,
            161,162,163,164,165,166,167,168,169
        );

    That query produces a pretty efficient-looking query plan. Knowing that the source column is defined as an INTEGER, we could also express the query this way:

        SELECT T.id
        FROM #Test AS T
        WHERE T.id >= 101
        AND T.id <= 169
        AND T.id % 10 > 0;

    We get a similar-looking plan. If you look closely, you might notice that the line connecting the two icons is a little thinner than before. The first query is estimated to produce 61.9167 rows, very close to the 63 rows we know the query will return. The second query presents a tougher challenge for SQL Server because it doesn't know how to predict the selectivity of the modulo expression (T.id % 10 > 0). Without that last line, the second query is estimated to produce 68.1667 rows, a slight overestimate. Adding the opaque modulo expression results in SQL Server guessing at the selectivity. As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows.

    The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement. For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models. Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054. The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167).

    If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution. We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON:

        Table '#Test'. Scan count 63, logical reads 126, physical reads 0.
        Table '#Test'. Scan count 01, logical reads 002, physical reads 0.

    The query with the IN list uses 126 logical reads (and has a 'scan count' of 63), while the second query form completes with just 2 logical reads (and a 'scan count' of 1). It is no coincidence that 126 = 63 * 2, by the way. It is almost as if the first query is doing 63 seeks, compared to one for the second query. In fact, that is exactly what it is doing. There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.
    To see the 63 seek operations, you have to click on the Seek icon and look in the Properties window (press F4, or right-click and choose it from the menu). The Seek Predicates list shows a total of 63 seek operations, one for each of the values from the IN list contained in the first query. I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101. Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan.

    Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value. Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf). We can see the index depth using the INDEXPROPERTY function, or by using a DMV:

        SELECT S.index_type_desc, S.index_depth
        FROM sys.dm_db_index_physical_stats
        (
            DB_ID(N'tempdb'),
            OBJECT_ID(N'tempdb..#Test', N'U'),
            1, 1, DEFAULT
        ) AS S;

    Let's look now at the Properties window when the Clustered Index Seek from the second query is selected. There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101). It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169). Every row that meets the seek range condition is also tested against the Residual Predicate highlighted above (id % 10 > 0), and is only returned if it matches that as well.

    You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks. It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster. Let's run both query forms 10,000 times and measure the elapsed time:

        DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE();
        SET NOCOUNT ON;
        SET STATISTICS XML OFF;

        WHILE @n > 0
        BEGIN
            SELECT @i = T.id
            FROM #Test AS T
            WHERE T.id IN
            (
                101,102,103,104,105,106,107,108,109,
                111,112,113,114,115,116,117,118,119,
                121,122,123,124,125,126,127,128,129,
                131,132,133,134,135,136,137,138,139,
                141,142,143,144,145,146,147,148,149,
                151,152,153,154,155,156,157,158,159,
                161,162,163,164,165,166,167,168,169
            );
            SET @n -= 1;
        END;
        PRINT DATEDIFF(MILLISECOND, @s, GETDATE());
        GO

        DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE();
        SET NOCOUNT ON;

        WHILE @n > 0
        BEGIN
            SELECT @i = T.id
            FROM #Test AS T
            WHERE T.id >= 101
            AND T.id <= 169
            AND T.id % 10 > 0;
            SET @n -= 1;
        END;
        PRINT DATEDIFF(MILLISECOND, @s, GETDATE());

    On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms. The main point of this post is not performance, however; it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail.

    When is a seek not a seek? When it is 63 seeks.

    © Paul White 2011
    email: [email protected]
    twitter: @SQL_kiwi


  • SET game odds simulation (MATLAB)

    - by yuk
    Here is an interesting problem for your weekend. :) I recently found the great card game SET. Briefly, there are 81 cards with four features: symbol (oval, squiggle or diamond), color (red, purple or green), number (one, two or three) and shading (solid, striped or open). The task is to find (from 12 selected cards) a SET of 3 cards, in which each of the four features is either all the same on each card or all different on each card (no 2+1 combination). In my free time I've decided to code it in MATLAB to find a solution and to estimate the odds of having a set among randomly selected cards. Here is the code:

        %% initialization
        K = 12; % cards to draw
        NF = 4; % number of features (usually 3 or 4)
        setallcards = unique(nchoosek(repmat(1:3,1,NF),NF),'rows'); % all cards: rows - cards, columns - features
        setallcomb = nchoosek(1:K,3); % index of all combinations of K cards by 3

        %% test
        tic
        NIter = 1e2;   % number of test iterations
        setexists = 0; % test results holder
        % C = progress('init'); % if you have progress function from FileExchange
        for d = 1:NIter
            % C = progress(C,d/NIter);
            % cards for current test
            setdrawncardidx = randi(size(setallcards,1),K,1);
            setdrawncards = setallcards(setdrawncardidx,:);
            % find all sets in current test iteration
            for setcombidx = 1:size(setallcomb,1)
                setcomb = setdrawncards(setallcomb(setcombidx,:),:);
                if all(arrayfun(@(x) numel(unique(setcomb(:,x))), 1:NF)~=2) % test one combination
                    setexists = setexists + 1;
                    break % to find only the first set
                end
            end
        end
        fprintf('Set:NoSet = %g:%g = %g:1\n', setexists, NIter-setexists, setexists/(NIter-setexists))
        toc

    100-1000 iterations are fast, but be careful with more. One million iterations takes about 15 hours on my home computer. Anyway, with 12 cards and 4 features I've got odds of around 13:1 in favor of having a set. This is actually a problem. The instruction book says this number should be 33:1, and it was recently confirmed by Peter Norvig. He provides the Python code, but I didn't test it. So can you find an error?
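    A combinatorial cross-check that may help when validating a simulation like this (standard SET arithmetic, not from the original post): for any two cards there is exactly one third card that completes a SET, because each feature is forced to be either the shared value or the one remaining value. The total number of SETs in the deck is therefore

        \[ \binom{81}{2} \Big/ 3 \;=\; \frac{81 \cdot 80}{2 \cdot 3} \;=\; 1080, \]

    dividing by 3 because each SET is counted once for each of its three pairs.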


  • fileToString keeps on returning ""

    - by karikari
    I managed to get this code to compile without error, but somehow it does not return the strings that I wrote inside file1.txt and file.txt, whose paths I pass in via str1 and str2. My objective is to use this open source library to measure the similarity between the strings contained inside the 2 text files. Inside its Javadoc, it states that:

        public static java.lang.StringBuffer fileToString(java.io.File f)
        private call to load a file and return its content as a string.
        Parameters: f - a file for which to load its content
        Returns: a string containing the files contents or "" if empty or not present

    Here is my modified code, which tries to use the FileLoader function but fails to return the strings inside the files. The end result keeps on returning me the "". I do not know where my fault is:

        package uk.ac.shef.wit.simmetrics;

        import java.io.File;

        import uk.ac.shef.wit.simmetrics.similaritymetrics.*;
        import uk.ac.shef.wit.simmetrics.utils.*;

        public class SimpleExample {

            public static void main(final String[] args) {
                if (args.length != 2) {
                    usage();
                } else {
                    String str1 = "arg[0]";
                    String str2 = "arg[1]";
                    File objFile1 = new File(str1);
                    File objFile2 = new File(str2);
                    FileLoader obj1 = new FileLoader();
                    FileLoader obj2 = new FileLoader();
                    str1 = obj1.fileToString(objFile1).toString();
                    str2 = obj2.fileToString(objFile2).toString();
                    System.out.println(str1);
                    System.out.println(str2);
                    AbstractStringMetric metric = new MongeElkan();
                    // this single line performs the similarity test
                    float result = metric.getSimilarity(str1, str2);
                    // outputs the results
                    outputResult(result, metric, str1, str2);
                }
            }

            private static void outputResult(final float result, final AbstractStringMetric metric, final String str1, final String str2) {
                System.out.println("Using Metric " + metric.getShortDescriptionString() + " on strings \"" + str1 + "\" & \"" + str2 + "\" gives a similarity score of " + result);
            }

            private static void usage() {
                System.out.println("Performs a rudimentary string metric comparison from the arguments given.\n\tArgs:\n\t\t1) String1 to compare\n\t\t2)String2 to compare\n\n\tReturns:\n\t\tA standard output (command line of the similarity metric with the given test strings, for more details of this simple class please see the SimpleExample.java source file)");
            }
        }


  • No value given for one or more required parameters in connection initialisation

    - by DarkJaff
    Hi everyone, I have a C# forms application that uses an Access database. The application works perfectly in debug and release, and it works on all versions of Windows, but it crashes on one computer with Windows 7. The message I get is:

        System.Data.OleDb.OleDbException: No value given for one or more required parameters.

    The function that is supposedly not working is this:

        public void InitConnection(string strFile)
        {
            string strConnection = String.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};User Id=admin;Password=;", strFile);
            m_conn = new OleDbConnection(strConnection);
            try
            {
                // Check that the connection is not already open
                if (m_conn.State != ConnectionState.Open)
                {
                    m_conn.Open();
                    m_VCoeffModele = GetModeleCoeff();
                }
            }
            catch (Exception err)
            {
                throw err;
            }
        }

    I think it's something related to the connection string, but why only on that computer? Thanks for your help! DarkJaff

    EDIT
    Here is the complete error message:

        See the end of this message for details on invoking just-in-time (JIT) debugging instead of this dialog box.

        ***** Exception Text *******
        System.Data.OleDb.OleDbException: No value given for one or more required parameters.
        at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(OleDbHResult hr)
        at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult)
        at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult)
        at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult)
        at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method)
        at System.Data.OleDb.OleDbCommand.ExecuteReader(CommandBehavior behavior)
        at System.Data.OleDb.OleDbCommand.ExecuteReader()
        at DatabaseLayer.DatabaseFacade.GetModeleCoeff()
        at DatabaseLayer.DatabaseFacade.InitConnection(String strFile)
        at CalculatriceCHW.ListeMesure.OuvrirFichier(String strFichier)
        at CalculatriceCHW.ListeMesure.nouveauFichierMenu_Click(Object sender, EventArgs e)
        at System.Windows.Forms.ToolStripItem.RaiseEvent(Object key, EventArgs e)
        at System.Windows.Forms.ToolStripMenuItem.OnClick(EventArgs e)
        at System.Windows.Forms.ToolStripItem.HandleClick(EventArgs e)
        at System.Windows.Forms.ToolStripItem.HandleMouseUp(MouseEventArgs e)
        at System.Windows.Forms.ToolStripItem.FireEventInteractive(EventArgs e, ToolStripItemEventType met)
        at System.Windows.Forms.ToolStripItem.FireEvent(EventArgs e, ToolStripItemEventType met)
        at System.Windows.Forms.ToolStrip.OnMouseUp(MouseEventArgs mea)
        at System.Windows.Forms.ToolStripDropDown.OnMouseUp(MouseEventArgs mea)
        at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
        at System.Windows.Forms.Control.WndProc(Message& m)
        at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
        at System.Windows.Forms.ToolStrip.WndProc(Message& m)
        at System.Windows.Forms.ToolStripDropDown.WndProc(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
        at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)


  • Scala actors: receive vs react

    - by jqno
    Let me first say that I have quite a lot of Java experience, but have only recently become interested in functional languages. Recently I've started looking at Scala, which seems like a very nice language. However, I've been reading about Scala's Actor framework in Programming in Scala, and there's one thing I don't understand. In chapter 30.4 it says that using react instead of receive makes it possible to re-use threads, which is good for performance, since threads are expensive in the JVM. Does this mean that, as long as I remember to call react instead of receive, I can start as many Actors as I like? Before discovering Scala, I've been playing with Erlang, and the author of Programming Erlang boasts about spawning over 200,000 processes without breaking a sweat. I'd hate to do that with Java threads. What kind of limits am I looking at in Scala as compared to Erlang (and Java)? Also, how does this thread re-use work in Scala? Let's assume, for simplicity, that I have only one thread. Will all the actors that I start run sequentially in this thread, or will some sort of task-switching take place? For example, if I start two actors that ping-pong messages to each other, will I risk deadlock if they're started in the same thread? According to Programming in Scala, writing actors to use react is more difficult than with receive. This sounds plausible, since react doesn't return. However, the book goes on to show how you can put a react inside a loop using Actor.loop. As a result, you get loop { react { ... } } which, to me, seems pretty similar to while (true) { receive { ... } } which is used earlier in the book. Still, the book says that "in practice, programs will need at least a few receive's". So what am I missing here? What can receive do that react cannot, besides return? And why do I care? Finally, coming to the core of what I don't understand: the book keeps mentioning how using react makes it possible to discard the call stack to re-use the thread. How does that work? Why is it necessary to discard the call stack? And why can the call stack be discarded when a function terminates by throwing an exception (react), but not when it terminates by returning (receive)? I have the impression that Programming in Scala has been glossing over some of the key issues here, which is a shame, because otherwise it's a truly excellent book.
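    For reference, here is a minimal side-by-side sketch of the two styles discussed above (assuming the scala.actors library that Programming in Scala covers; the println bodies are illustrative):

        import scala.actors.Actor._

        // Thread-bound style: receive returns a value, so an ordinary
        // while loop works, but the actor pins a JVM thread while it waits.
        val threadBound = actor {
            while (true) {
                receive {
                    case msg => println("receive: " + msg)
                }
            }
        }

        // Event-based style: react never returns normally; loop re-registers
        // the handler, letting the scheduler release the thread between messages.
        val eventBased = actor {
            loop {
                react {
                    case msg => println("react: " + msg)
                }
            }
        }

        threadBound ! "ping"
        eventBased ! "pong"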


  • Display label character by character using javascript

    - by Muhammad Sajid
    Hi, I am creating a Hangman game using PHP, MySQL and Javascript. Everything is going perfectly: I get a word randomly from the DB, show it as a label, and apply to it a class where display = none. Now when I click on a character, that character becomes disabled, which is what I actually want, but the label character does not show. My code is:

        <link href="style.css" rel="stylesheet" type="text/css" media="screen" />
        <?php
        include( 'config.php' );
        $question = questions(); // Get question.
        $alpha = alphabats();    // Get alphabets.
        ?>
        <script language="javascript">
        function clickMe( name ){
            var question = '<?php echo $question; ?>';
            var questionLen = <?php echo strlen($question); ?>;
            for ( var i = 0; i < questionLen; i++ ){
                if ( question[i] == name ){
                    var link = document.getElementById( name );
                    link.style.display = 'none';
                    var label = document.getElementById( 'questionLabel' + i );
                    label.style.display = 'none';
                }
            }
        }
        </script>
        <div>
            <table align="center" style="border:solid 1px">
                <tr>
                <?php
                for ( $i = 0; $i < 26; $i++ ) {
                    echo "<td><a href='#' id=$alpha[$i] name=$alpha[$i] onclick=clickMe('$alpha[$i]');>". $alpha[$i] ."</a>&nbsp;</td>";
                }
                ?>
                </tr>
            </table>
            <br/>
            <table align="center" style="border:solid 1px">
                <tr>
                <?php
                for ( $i = 0; $i < strlen($question); $i++ ) {
                    echo "<td class='question'><label id=questionLabel$i >". $question[$i] ."</label></td>";
                }
                ?>
                </tr>
            </table>
        </div>


  • How can I model the data in a multi-language data editor in WPF with MVVM?

    - by Patrick Szalapski
    Are there any good practices to follow when designing a model/ViewModel to represent data in an app that will view/edit that data in multiple languages? Our top-level class, let's call it Course, contains several collection properties, say Books and TopicsCovered, which each might have a collection property among its data. For example, the data needs to represent course1.Books.First().Title in different languages, and course1.TopicsCovered.First().Name in different languages. We want an app that can edit any of the data for one given course in any of the available languages, as well as edit non-language-specific data, perhaps the Author of a Book (i.e. course1.Books.First().Author).

    We are having trouble figuring out how best to set up the model to enable binding in the XAML view. For example, do we replace (in the single-language model) each String with a collection of LanguageSpecificString instances? So to get the author in the current language:

        course1.Books.First().Author.Where(Function(a) a.Language = CurrentLanguage).SingleOrDefault

    If we do that, we cannot easily bind to any value in one given language, only to the collection of language values such as in an ItemsControl.

        <TextBox Text={Binding Author.???} /> <!-- no way to bind to the current language author -->

    Do we replace the top-level Course class with a collection of language-specific Courses? So to get the author in the current language:

        course1.GetLanguage(CurrentLanguage).Books.First.Author

    If we do that, we can only easily work with one language at a time; we might want a view to show one language and let the user edit the other.

        <TextBox Text={Binding Author} /> <!-- good -->
        <TextBlock Text={Binding ??? } /> <!-- no way to bind to the other language author -->

    Also, that has the disadvantage of not representing language-neutral data as such; every property (such as Author) would seem to be in multiple languages. Even non-string properties would be in multiple languages. Is there an option in between those two? Is there another way that we aren't thinking of? I realize this is somewhat vague, but it would seem to be a somewhat common problem to design for.

    Note: This is not a question about providing a multilingual UI, but rather about actually editing multi-language data in a flexible way.


  • How can I test caching and cache busting?

    - by Nathan Long
    In PHP, I'm trying to steal a page from the Rails playbook (see 'Using Asset Timestamps' here):

        By default, Rails appends assets' timestamps to all asset paths. This allows you to set a cache-expiration date for the asset far into the future, but still be able to instantly invalidate it by simply updating the file (and hence updating the timestamp, which then updates the URL as the timestamp is part of that, which in turn busts the cache). It's the responsibility of the web server you use to set the far-future expiration date on cache assets that you need to take advantage of this feature. Here's an example for Apache:

        # Asset Expiration
        ExpiresActive On
        <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$">
            ExpiresDefault "access plus 1 year"
        </FilesMatch>

    If you look at the source for a Rails page, you'll see what they mean: the path to a stylesheet might be "/stylesheets/scaffold.css?1268228124", where the numbers at the end are the timestamp when the file was last updated. So it should work like this:

    - The browser says 'give me this page'.
    - The server says 'here, and by the way, this stylesheet called scaffold.css?1268228124 can be cached for a year - it's not gonna change.'
    - On reloads, the browser says 'I'm not asking for that css file, because my local copy is still good.'
    - A month later, you edit and save the file, which changes the timestamp, which means that the file is no longer called scaffold.css?1268228124 because the numbers change.
    - When the browser sees that, it says 'I've never seen that file! Give me a copy, please.' The cache is 'busted.'

    I think that's brilliant. So I wrote a function that spits out stylesheet and javascript tags with timestamps appended to the file names, and I configured Apache with the statement above. Now: how do I tell if the caching and cache busting are working?

    I'm checking my pages with two plugins for Firebug: YSlow and Google Page Speed. Both seem to say that my files are caching: "Add expires headers" in YSlow and "leverage browser caching" in Page Speed are both checked. But when I look at the Page Speed Activity, I see a lot of requests and waiting and no 'cache hits'. If I change my stylesheet and reload, I do see the change immediately. But I don't know if that's because the browser never cached in the first place or because the cache is busted. How can I tell?
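    One low-tech check worth noting here (not from the original question; the URL is illustrative): request just the response headers and confirm the far-future expiry is actually being sent.

        # Ask the server for headers only; look for an Expires and/or Cache-Control
        # value about a year out, matching the FilesMatch rule above.
        curl -I "http://localhost/stylesheets/scaffold.css?1268228124"

    If no Expires header appears, mod_expires may not be enabled, and the browser is then free to re-request the file every time.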


  • Moving MVC2 Helpers to MVC3 razor view engine

    - by Dai Bok
    Hi, in my MVC 2 site, I have an HTML helper that I use to add javascripts for my pages. In my master page I have the main javascripts I want to include, and then in the aspx pages I include page-specific javascripts. So for example, my Site.Master has something like this:

        ....
        <head>
            <%=html.renderScripts() %>
        </head>
        ...
        //core scripts for main page
        <%html.AddScript("/scripts/jquery.js") %>
        <%html.AddScript("/scripts/myLib.js") %>
        ....

    Then in the child aspx page, I may also want to include other scripts.

        ...
        //the page specific script I want to use
        <% html.AddScript("/scripts/register.aspx.js") %>
        ...

    So when the full page gets rendered, the javascript files are all collected and rendered in the head by the SiteMaster placeholder function RenderScripts. This works fine. Now with MVC 3 and the Razor view engine, the layout pages behave differently, because now my page-level javascripts are not rendered/included. Now all I see is the LayoutMaster contents. How do I get the solution to work with MVC 3 and the Razor view engine? (The helper has already been re-written to return an HTMLString ;-))

    For reference, my MasterLayout looks like this:

        ...
        ...
        <head>
            @{
                Html.AddJavaScript("/Scripts/jQuery.js");
                Html.AddJavaScript("/Scripts/myLib.js");
            }
            //Render scripts
            @html.RenderScripts()
        </head>
        ....

    and the child page looks like this:

        @{
            Layout = "~/Views/Shared/MasterLayout.cshtml";
            ViewBag.Title = "Child Page";
            Html.AddJavaScript("/Scripts/register.aspx.js");
        }
        ....
        <div>some html </div>

    Thanks for your help.

    Edit: Just to explain, if this question is not clear enough. When producing a page, I collect all the javascript files the designers want to use via html.AddJavascript("filename.js") and store them in a dictionary, which (1) stops people adding duplicate js files, and (2) helps keep JS in one place and prevents designers from adding javascript files all over the place. Then, finally, when the page is ready to render, I write out all the javascript files neatly in the header. This used to work fine with Master/SiteMaster pages in MVC 2, but how can I achieve this with Razor?


  • adding a class when link is clicked from Wordpress loop

    - by Carey Estes
    I am trying to isolate and add a class to a clicked anchor tag. The tags are being pulled from a Wordpress loop. I can write jQuery to remove the "static" class, but it is removing the class from all tags in the div rather than just the one clicked, and it is not adding the "active" class. Here is the WP loop:

        <div class="more">
            <a class="static" href="<?php bloginfo('template_url'); ?>/work/">ALL</a>
            <?php
            foreach ($tax_terms as $tax_term) {
                echo '<a class="static" href="' . esc_attr(get_term_link($tax_term, $taxonomy)) . '" title="' . sprintf( __( "View all posts in %s" ), $tax_term->name ) . '" ' . '>' . $tax_term->name.'</a>';
            }
            ?>
        </div>

    This generates the following html:

        <div class="more">
            <a class="static" href="#">ALL</a>
            <a class="static" href="#">Things</a>
            <a class="static" href="#"> More Things</a>
            <a class="static" href="#">Objects</a>
            <a class="static" href="#">Goals</a>
            <a class="static" href="#">Books</a>
            <a class="static" href="#">Drawings</a>
            <a class="static" href="#">Thoughts</a>
        </div>

    JQuery:

        $("div.more a").on("click", function () {
            $("a.static").removeClass("static");
            $(this).addClass("active");
        });

    I have reviewed the other similar questions here and here, but neither solution is working for me. Can this be done with jQuery, or should I put a click event in the html inline anchor? It looks like it is working just for a second until the page reloads.


  • How to send hashed data in SOAP request body?

    - by understack
    I want to imitate the following request using Zend_Soap_Client for this wsdl.

        <SOAP-ENV:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
            xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:clr="http://schemas.microsoft.com/soap/encoding/clr/1.0"
            SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
          <SOAP-ENV:Header>
            <h3:__MethodSignature xsi:type="SOAP-ENC:methodSignature"
                xmlns:h3="http://schemas.microsoft.com/clr/soap/messageProperties"
                SOAP-ENC:root="1"
                xmlns:a2="http://schemas.microsoft.com/clr/ns/System.Collections">xsd:string a2:Hashtable</h3:__MethodSignature>
          </SOAP-ENV:Header>
          <SOAP-ENV:Body>
            <i4:ReturnDataSet id="ref-1" xmlns:i4="http://schemas.microsoft.com/clr/nsassem/Interface.IRptSchedule/Interface">
              <sProc id="ref-5">BU</sProc>
              <ht href="#ref-6"/>
            </i4:ReturnDataSet>
            <a2:Hashtable id="ref-6" xmlns:a2="http://schemas.microsoft.com/clr/ns/System.Collections">
              <LoadFactor>0.72</LoadFactor>
              <Version>1</Version>
              <Comparer xsi:null="1"/>
              <HashCodeProvider xsi:null="1"/>
              <HashSize>11</HashSize>
              <Keys href="#ref-7"/>
              <Values href="#ref-8"/>
            </a2:Hashtable>
            <SOAP-ENC:Array id="ref-7" SOAP-ENC:arrayType="xsd:anyType[1]">
              <item id="ref-9" xsi:type="SOAP-ENC:string">@AppName</item>
            </SOAP-ENC:Array>
            <SOAP-ENC:Array id="ref-8" SOAP-ENC:arrayType="xsd:anyType[1]">
              <item id="ref-10" xsi:type="SOAP-ENC:string">AAGENT</item>
            </SOAP-ENC:Array>
          </SOAP-ENV:Body>
        </SOAP-ENV:Envelope>

    It seems that somehow I have to send the hashed "ref-7" and "ref-8" arrays embedded inside the body. How can I do this? The ReturnDataSet function takes two parameters; how can I send the additional "ref-7" and "ref-8" array data?

        $client = new SoapClient($wsdl_url, array('soap_version' => SOAP_1_1));
        $result = $client->ReturnDataset("BU", $ht);

    I don't know how to set $ht so that the hashed data is sent as a different body entry. Thanks.


  • replacing div content with a click using jquery

    - by Joel
    I see this question asked a lot in the related questions, but my need seems very simple compared to those examples, and sadly I'm still too new at js to know what to remove... so at the risk of being THAT GUY, I'm going to ask my question...

    I'm trying to switch out the div contents in a box depending on the button pushed. Right now I have it working using the animatedcollapse.toggle function, but it doesn't look very good. I want to replace it with a basic fade-in on click, then fade in the new content on the next button. Basic idea:

        <div>
            <ul>
                <li><a href="this will fade in the first_div"></li>
                <li><a href="this will fade in the second_div"></li>
                <li><a href="this will fade in the third_div"></li>
            </ul>
            <div class="first_container">
                <ul>
                    <li>stuff</li>
                    <li>stuff</li>
                    <li>stuff</li>
                </ul>
            </div>
            <div class="second_container">
                <ul>
                    <li>stuff</li>
                    <li>stuff</li>
                    <li>stuff</li>
                </ul>
            </div>
            <div class="third_container">
                <ul>
                    <li>stuff</li>
                    <li>stuff</li>
                    <li>stuff</li>
                </ul>
            </div>
        </div>

    I've got everything working with the animated collapse, but it's just an ugly effect for this situation, so I want to change it out. Thanks! Joel


  • Oracle Enterprise Manager Ops Center : Using Operational Profiles to Install Packages and other Content

    - by LeonShaner
    Oracle Enterprise Manager Ops Center provides numerous ways to deploy content, such as through OS Update Profiles, or as part of an OS Provisioning plan, or combinations of those and other "Install Software" capabilities of Deployment Plans. This short "how-to" blog will highlight an alternative way to deploy content using Operational Profiles.

    Usually we think of Operational Profiles as a way to execute a simple "one-time" script to perform a basic system administration function, which can optionally be based on user input; however, Operational Profiles can be much more powerful than that. There is often more to performing an action than merely running a script: sometimes configuration files, packages, binaries, and other scripts are needed to perform the action, and sometimes the user would like to leave such content on the system for later use.

    For shell scripts and other content written to be generic enough to work on any flavor of UNIX, converting the same scripts and configuration files into a Solaris 10 SVR4 package, a Solaris 11 IPS package, and/or a Linux RPM might be seen as three times the work, for little appreciable gain. That is where using an Operational Profile to deploy simple scripts and other generic content can be very helpful. The approach is so powerful that pretty much any kind of content can be deployed using an Operational Profile, provided the files involved are not overly large, and it is not necessary to convert the content into UNIX variant-specific formats.

    The basic formula for deploying content with an Operational Profile is as follows:

    - Begin with a traditional script header, which is a UNIX shell script that will be responsible for decoding and extracting content, copying files into the right places, and executing any other scripts and commands needed to install and configure that content.
    - Include steps to make the script platform-aware, to do the right thing for a given UNIX variant, or print a "sorry" message if the operator has somehow tried to run the Operational Profile on a system where the script is not designed to run. Ops Center can constrain execution by target type, so such checks at this level are an added safeguard, but they are also useful with the generic target type of "Operating System", where the admin wants the script to "do the right thing", whatever the UNIX variant.
    - Include helpful output to show script progress, and any other informational messages that can help the admin determine what has gone wrong in the case of a problem in script execution. Such messages will be shown in the job execution log.
    - Include necessary "clean up" steps for normal and error exit conditions.
    - Set non-zero exit codes when appropriate; a non-zero exit code will cause an Operational Profile job to be marked failed, which is the admin's cue to look into the job details for diagnostic messages in the output from the script.

    That first bullet deserves some explanation. If Operational Profiles are usually simple "one-time" scripts and binary content is not allowed, then how does the actual content (packages, binaries, and other scripts) get delivered along with the script? More specifically, how does one include such content without needing to first create some kind of traditional package? All that is required is to simply encode the content and append it to the end of the Operational Profile. The header portion of the Operational Profile will need to contain the commands to decode the embedded content that has been appended to the bottom of the script.
    The header code can do whatever else is needed, and finally clean up any intermediate files that were created during the decoding and extraction of the content.

    One way to encode binary and other content for inclusion in a script is to use the "uuencode" utility to convert the content into simple base64 ASCII text, a form that is suitable to be appended to an Operational Profile. The behavior of the "uudecode" utility is such that it will skip over any parts of the input that do not fit the uuencoded "begin" and "end" clauses. For that reason, your header script will be skipped over, and uudecode will find your embedded content, which you will uuencode and paste at the end of the Operational Profile. You can have as many "begin" / "end" clauses as you need; just separate each embedded file by an empty line between "begin" and "end" clauses.

    Example: Install SUNWsneep and set the system serial number

    Script: deploySUNWsneep.sh ( <- right-click / save to download)

    Highlights:

        #!/bin/sh
        # Required variables:
        OC_SERIAL="$OC_SERIAL" # The user-supplied serial number for the asset
        ...

    The above is a good practice, showing right up front what kind of input the Operational Profile will require. The right-hand side where $OC_SERIAL appears in this example will be filled in by Ops Center based on the user input at deployment time.

    The script goes on to restrict the use of the program to the intended OS type (Solaris 10 or older in this example, but other content might be suitable for Solaris 11 or Linux; it depends on the content and the script that will handle it). A temporary working directory is created, and then we have the command that decodes the embedded content from "self", which in scripting terms is $0 (a variable that expands to the name of the currently executing script):

        # Pass myself through uudecode, which will extract content to the current dir
        uudecode $0

    At that point, whatever content was appended in uuencoded form at the end of the script has been written out to the current directory. In this example that yields a file, SUNWsneep.7.0.zip, which the rest of the script proceeds to unzip and pkgadd, followed by running "/opt/SUNWsneep/bin/sneep -s $OC_SERIAL", which is the command that stores the system serial for future use by other programs such as Explorer. Don't get hung up on the example having used a pkgadd command. The content started as a zip file, and it could have been a tar.gz, or any other file. This approach simply decodes the file. The header portion of the script has to make sense of the file and do the right thing (e.g. it's up to you).

    The script goes on to clean up after itself, whether or not the above was successful. Errors are echoed by the script, and a non-zero exit code is set where appropriate. Second to last, we have:

        # just in case, exit explicitly, so that uuencoded content will not cause error
        OPCleanUP
        exit

        # The rest of the script is ignored, except by uudecode
        #
        # UUencoded content follows
        #
        # e.g. for each file needed,
        #  $ uuencode -m {source} {source} > {target}.uu5
        # then paste the {target}.uu5 files below
        # they will be extracted into the working dir at $TDIR

    The commentary above also describes how to encode the content. Finally we have the uuencoded content:
        begin-base64 444 SUNWsneep.7.0.zip
        UEsDBBQAAAAIAPsRy0Di3vnukAAAAMcAAAAKABUAcmVhZG1lLnR4dFVUCQADOqnVT7up
        ...
        VXgAAFBLBQYAAAAAAgACAJEAAADTNwEAAAA=
        ====

    That last line of "====" is the base64 uuencode equivalent of a blank line, followed by "end", and as mentioned you can have as many begin/end clauses as you need. Just separate each embedded file by a blank line after each ==== and before each begin-base64.

    Deploying the example Operational Profile looks like this (where I have pasted the system serial number into the required field). The job succeeded, and the job details show the kind of diagnostic messages that the example script produces and how Ops Center displays them.

    This same general approach could be used to deploy Explorer, and other useful utilities and scripts.

    Please let us know what you think. Until next time...

    Leon Shaner | Senior IT/Product Architect
    Systems Management | Ops Center Engineering @ Oracle

    The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter


  • Adding a valuetype to IDL, compile and it fails with "No factory found"

    - by jim
    I can't figure out why the client keeps complaining about not finding the factory. I've tried the IDL with and without the "factory" keyword, and that didn't change the behavior. The SDMGeoVT IDL matches other objects used (which run successfully). The SDMGeoVT classes generated match other generated classes in regards to inheritance and methods. The IDL is as follows (the idlj compiler runs without error):

        valuetype SDMGeoVT : common::OFBaseVT {
            private string sdmName;
            private string zip;
            private string atz;
            private long long primaryDeptId;
            private string deptName;

            factory instance(in string name, in string ZIP, in string ATZ,
                             in long long primaryDeptId, in string deptName, in string name);

            string getZIP();
            void setZIP(in string ZIP);
            string getATZ();
            void setATZ(in string ATZ);
            long long getPrimaryDeptId();
            void setPrimaryDeptId(in long long primaryDeptId);
            string getDeptName();
            void setDeptName(in string deptName);
        };

        typedef sequence<SDMGeoVT> SDMGeoVTList;

        interface PLManagerIF : PublicManagerIF {
            pl::SDMGeoVTList getSDMGeos(in framework::ITransactionHandle tHandle,
                                        in long long productionLocationId);
        };

    I implement the function on the server, and I see the server code run and serialize the object over the wire (the server code runs fine). The client bombs with the following stack trace (the first couple of lines are from the jacORB library). I've created a small app just to compile and test the code (ArrayClient & ArrayServer). The base app (from the jacORB demo) works fine. I've tried using the base class OFBaseVT and a single object (SDMGeoVT vs the list return) and have the same issue.

        2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f)
        2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f)

        org.omg.CORBA.MARSHAL: No factory found for: IDL:pl/SDMGeoVT:1.0
        at org.jacorb.orb.CDRInputStream.read_untyped_value(CDRInputStream.java:2906)
        at org.jacorb.orb.CDRInputStream.read_typed_value(CDRInputStream.java:3082)
        at org.jacorb.orb.CDRInputStream.read_value(CDRInputStream.java:2679)
        at com.helloworld.pl.SDMGeoVTHelper.read(SDMGeoVTHelper.java:106)
        at com.helloworld.pl.SDMGeoVTListHelper.read(SDMGeoVTListHelper.java:51)
        at com.helloworld.pl._PLManagerIFStub.getSDMGeos(_PLManagerIFStub.java:28)
        at com.helloworld.ArrayClient.<init>(ArrayClient.java:40)
        at com.helloworld.ArrayClient.main(ArrayClient.java:125)
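    For context: a MARSHAL "No factory found for: IDL:..." raised by the receiving ORB generally means no value factory is registered on that side for the valuetype's repository id. A minimal sketch of what client-side registration typically looks like is below; it assumes an initialized ORB and a concrete subclass SDMGeoVTImpl of the generated abstract SDMGeoVT class (both names are assumptions, not from the original post):

        import org.omg.CORBA.portable.ValueFactory;

        public final class SDMGeoVTFactorySetup {
            // Call once after ORB.init(), before invoking getSDMGeos().
            public static void register(final org.omg.CORBA.ORB orb) {
                ValueFactory factory = new ValueFactory() {
                    public java.io.Serializable read_value(
                            final org.omg.CORBA_2_3.portable.InputStream is) {
                        // Hand the stream a fresh instance; the ORB unmarshals its state.
                        return is.read_value(new SDMGeoVTImpl()); // SDMGeoVTImpl: assumed impl class
                    }
                };
                ((org.omg.CORBA_2_3.ORB) orb)
                        .register_value_factory("IDL:pl/SDMGeoVT:1.0", factory);
            }
        }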


  • wrong operator() overload called

    - by user313202
    Okay, I am writing a matrix class and have overloaded the function call operator twice. The core of the matrix is a 2D double array. I am using the MinGW GCC compiler called from a Windows console. The first overload is meant to return a double from the array (for viewing an element). The second overload is meant to return a reference to a location in the array (for changing the data in that location).

        double operator()(int row, int col) const; // allows view of element
        double &operator()(int row, int col);      // allows assignment of element

    I am writing a testing routine and have discovered that the "viewing" overload never gets called. For some reason the compiler "defaults" to calling the overload that returns a reference when the following printf() statement is used:

        fprintf(outp, "%6.2f\t", testMatD(i,j));

    I understand that I'm insulting the gods by writing my own matrix class without using vectors and testing with C I/O functions. I will be punished thoroughly in the afterlife; no need to do it here. Ultimately I'd like to know what is going on here and how to fix it. I'd prefer to use the cleaner-looking operator overloads rather than member functions. Any ideas? -Cal

    The matrix class (irrelevant code omitted):

        class Matrix
        {
        public:
            double getElement(int row, int col) const; // returns the element at row,col

            // operator overloads
            double operator()(int row, int col) const; // allows view of element
            double &operator()(int row, int col);      // allows assignment of element

        private:
            // data members
            double **array; // pointer to data array
        };

        double Matrix::getElement(int row, int col) const
        {
            // transform indices into true coordinates (from sorted coordinates;
            // only row needs to be transformed (user can only sort by row))
            row = sortedArray[row];
            result = array[usrZeroRow+row][usrZeroCol+col];
            return result;
        }

        // operator overloads
        double Matrix::operator()(int row, int col) const
        {
            // this overload is used when viewing an element
            return getElement(row, col);
        }

        double &Matrix::operator()(int row, int col)
        {
            // this overload is used when placing an element
            return array[row+usrZeroRow][col+usrZeroCol];
        }

    The testing program (irrelevant code omitted):

        int main(void)
        {
            FILE *outp;
            outp = fopen("test_output.txt", "w+");

            Matrix testMatD(5,7); // construct 5x7 matrix
            // some initializations omitted

            fprintf(outp, "%6.2f\t", testMatD(i,j)); // calls the wrong overload
        }
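    For reference, a standalone sketch (not from the original post) showing that this is standard overload resolution rather than a compiler quirk: the const overload is chosen only when the object expression itself is const, so calls through a non-const matrix always pick the reference-returning version, even for reads.

        #include <cstdio>

        struct Probe {
            double value = 1.0;
            double  operator()(int, int) const { std::puts("const overload");     return value; }
            double& operator()(int, int)       { std::puts("non-const overload"); return value; }
        };

        int main() {
            Probe p;
            std::printf("%6.2f\n", p(0, 0));  // p is non-const -> non-const overload

            const Probe& cp = p;              // a const view of the same object
            std::printf("%6.2f\n", cp(0, 0)); // const object expression -> const overload
        }

    In other words, the program above prints "non-const overload" first even though the call only reads the value; going through a const reference (or a function taking const Matrix&) is what selects the const overload.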


  • R glm standard error estimate differences to SAS PROC GENMOD

    - by Michelle
    I am converting a SAS PROC GENMOD example into R, using glm in R. The SAS code was:

        proc genmod data=data0 namelen=30;
            model boxcoxy=boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 +
                  RACE1 + RACE3 + WEEKEND + SEQ/dist=normal;
            FREQ REPLICATE_VAR;
        run;

    My R code is:

        parmsg2 <- glm(boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 +
                       RACE1 + RACE3 + WEEKEND + SEQ,
                       data = data0, family = gaussian, weights = REPLICATE_VAR)

    When I use summary(parmsg2) I get the same coefficient estimates as in SAS, but my standard errors are wildly different. The summary output from SAS is:

        Name       df  Estimate     StdErr      LowerWaldCL  UpperWaldCL  ChiSq      ProbChiSq
        Intercept  1    6.5007436   .00078884    6.4991975    6.5022897   67911982   0
        agegrp4    1    .64607262   .00105425    .64400633    .64813891   375556.79  0
        agegrp5    1    .4191395    .00089722    .41738099    .42089802   218233.76  0
        agegrp6    1   -.22518765   .00083118   -.22681672   -.22355857   73401.113  0
        agegrp7    1   -1.7445189   .00087569   -1.7462352   -1.7428026   3968762.2  0
        agegrp8    1   -2.2908855   .00109766   -2.2930369   -2.2887342   4355849.4  0
        race1      1   -.13454883   .00080672   -.13612997   -.13296769   27817.29   0
        race3      1   -.20607036   .00070966   -.20746127   -.20467944   84319.131  0
        weekend    1    .0327884    .00044731    .0319117     .03366511   5373.1931  0
        seq2       1   -.47509583   .00047337   -.47602363   -.47416804   1007291.3  0
        Scale      1    2.9328613   .00015586    2.9325559    2.9331668   -127

    The summary output from R is:

        Coefficients:
                     Estimate  Std. Error  t value  Pr(>|t|)
        (Intercept)   6.50074    0.10354    62.785   < 2e-16
        AGEGRP4       0.64607    0.13838     4.669   3.07e-06
        AGEGRP5       0.41914    0.11776     3.559   0.000374
        AGEGRP6      -0.22519    0.10910    -2.064   0.039031
        AGEGRP7      -1.74452    0.11494   -15.178   < 2e-16
        AGEGRP8      -2.29089    0.14407   -15.901   < 2e-16
        RACE1        -0.13455    0.10589    -1.271   0.203865
        RACE3        -0.20607    0.09315    -2.212   0.026967
        WEEKEND       0.03279    0.05871     0.558   0.576535
        SEQ          -0.47510    0.06213    -7.646   2.25e-14

    The importance of the difference in the standard errors is that the SAS coefficients are all statistically significant, but the RACE1 and WEEKEND coefficients in the R output are not. I have found a formula to calculate the Wald confidence intervals in R, but this is pointless given the difference in the standard errors, as I will not get the same results. Apparently SAS uses a ridge-stabilized Newton-Raphson algorithm for its estimates, which are ML. The information I read about the glm function in R is that the results should be equivalent to ML. What can I do to change my estimation procedure in R so that I get the equivalent coefficients and standard error estimates that were produced in SAS?

    To update, thanks to Spacedman's answer: I used weights because the data are from individuals in a dietary survey, and REPLICATE_VAR is a balanced repeated replication weight that is an integer (and quite large, in the order of 1000s or 10000s). The website that describes the weight is here. I don't know why the FREQ rather than the WEIGHT command was used in SAS. I will now test by expanding the number of observations using REPLICATE_VAR and rerunning the analysis.
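    A back-of-the-envelope note on why the two outputs can share coefficients but not standard errors (standard weighted-least-squares algebra, not from the original question): for a Gaussian model, both approaches solve the same weighted normal equations, so the point estimates match, but FREQ treats each row as that many real observations when estimating the dispersion:

        \[
        \widehat{\operatorname{Var}}(\hat\beta) = \hat\sigma^2 \,(X^\top W X)^{-1},
        \qquad
        \hat\sigma^2_{\text{weights}} = \frac{\sum_i w_i r_i^2}{\,n - p\,},
        \qquad
        \hat\sigma^2_{\text{FREQ}} \approx \frac{\sum_i w_i r_i^2}{\,\sum_i w_i - p\,},
        \]

    so the standard errors differ by a factor of roughly $\sqrt{\textstyle\sum_i w_i / n}$, which is very large when the replicate weights run into the thousands. This is consistent with expanding the data by REPLICATE_VAR, as the poster proposes at the end.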


  • SQL Select - adding field to Select is changing the results

    - by nycdan
    I'm stumped by this SQL problem that I suspect will be easy pickings for someone out there. I have a table that contains rows representing several daily lists of ranked items. The relevant fields are as follows: ID, ListID, ItemID, ItemName, ItemRank, Date. I have a query that returns the items that were on a list yesterday but not today (Items Off List), as follows:

        Select ItemID, ListID, ItemName,
               convert(varchar(10), MAX(date), 101) as date,
               COUNT(ItemName) as days_on_list
        From Table
        Group By ItemID, ListID, ItemName
        Having Max(date) = DATEADD("d", -1, convert(varchar(10), getdate(), 101))
           and ListID = 1
        Order By ListID, ItemName, COUNT(ItemName)

    Basically I'm looking for records where the max date is yesterday. It works fine and shows the number of days each item was previously on the list (although not necessarily consecutively, but that's fine for now). The problem is when I try to add ranking to see what yesterday's rank was. I tried the following:

        Select ItemID, ListID, ItemName, ranking,
               convert(varchar(10), MAX(date), 101) as date,
               COUNT(ItemName) as days_on_list
        From Table
        Group By ItemID, ListID, ItemName, ranking
        Having Max(date) = DATEADD("d", -1, convert(varchar(10), getdate(), 101))
           and ListID = 1
        Order By ListID, ItemName, ranking, COUNT(ItemName)

    This returns a great deal more records than the previous query, so something isn't right with it. I want the same number of records, but with the ranking included. I can get the rank by doing a self-join with a subquery and getting records where the ItemID occurs yesterday but not today, but then I don't know how to get the Count any more. Appreciation in advance for any help with this.

    ======== SOLVED ==============

        Select ItemID, ListID, ItemName, ranking,
               convert(varchar(10), MAX(date), 101) as date,
               COUNT(ItemName) as days_on_list
        from Table T
        Where date = DATEADD("d", -1, convert(varchar(10), getdate(), 101))
          and ListID = 1
          and T.ItemID Not In (select T.ItemID
                               from Table T
                               join Table T2 on T.ItemID = T2.ItemID and T.ListID = T2.ListID
                               where T.date = DATEADD("d", -1, convert(varchar(10), getdate(), 101))
                                 and T2.date = convert(varchar(10), getdate(), 101)
                                 and T.ListID = 1)
        Group by ItemID, ListID, ItemName, ranking

    Basically, what I did was create a subquery that finds all items that appear in both days, then select the items that appeared yesterday but are not in the set of items that appeared both days. Then I was able to do the aggregate function and grouping correctly. I would NOT be surprised if this is more convoluted than necessary, but I understand it and can modify it as needed, and performance doesn't seem to be an issue. Thanks everyone for the assist.


  • Android spinner inside a custom control - OnItemSelectedListener does not trigger

    - by Idan
    I am writing a custom control that extends LinearLayout. Inside that control I am using a Spinner to let the user select an item from a list. The problem I have is that the OnItemSelectedListener event does not fire. When moving the same code to an Activity/Fragment, all is working just fine. I have followed some answers that were given to others asking about the same issue, and nothing helped; the event still does not fire.

    This is my code after I followed the answers that suggested putting the spinner inside my layout XML instead of creating it in code (I get the same result when I just use new Spinner(ctx)).

    Layout XML:

        <Spinner
            android:id="@+id/accSpinner"
            android:layout_width="0dip"
            android:layout_height="0dip" />

    Initialization function of the control (called in the control's constructor):

        private void init() {
            LayoutInflater layoutInflater = LayoutInflater.from(mContext);
            mAccountBoxView = layoutInflater.inflate(R.layout.control_accountselector, null);
            mTxtAccount = (TextView) mAccountBoxView.findViewById(R.id.txtAccount);
            mSpinner = (Spinner) mAccountBoxView.findViewById(R.id.accSpinner);
            mAccountBoxView.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View v) {
                    mSpinner.performClick();
                }
            });
            setSpinner();
            addView(mAccountBoxView);
        }

        private void setSpinner() {
            ArrayAdapter<String> dataAdapter = new ArrayAdapter<String>(mContext,
                    android.R.layout.simple_spinner_item, mItems);
            dataAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
            mSpinner.setAdapter(dataAdapter);
            mSpinner.setOnItemSelectedListener(new OnItemSelectedListener() {
                @Override
                public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
                    String selectedItem = mItems.get(position);
                    handleSelectedItem(selectedItem);
                }

                @Override
                public void onNothingSelected(AdapterView<?> parent) {
                }
            });
        }

    The spinner opens just fine when I touch my control, and the list of items is there as it should be. When I click an item the spinner closes, but I never get to onItemSelected or onNothingSelected. Any ideas?

    Read the article

  • CakePHP HABTM: Editing one item causes HABTM row to get recreated, destroys extra data

    - by leo-the-manic
    I'm having trouble with my HABTM relationship in CakePHP. I have two models like so: Department HABTM Location. One large company has many buildings, and each building provides a limited number of services. Each building also has its own webpage, so in addition to the HABTM relationship itself, each HABTM row has a url field where the user can go to find additional information about the service they're interested in and how it operates at the building they're interested in. I've set up the models like so:

        <?php
        class Location extends AppModel {
            var $name = 'Location';
            var $hasAndBelongsToMany = array(
                'Department' => array(
                    'with' => 'DepartmentsLocation',
                    'unique' => true
                )
            );
        }
        ?>

        <?php
        class Department extends AppModel {
            var $name = 'Department';
            var $hasAndBelongsToMany = array(
                'Location' => array(
                    'with' => 'DepartmentsLocation',
                    'unique' => true
                )
            );
        }
        ?>

        <?php
        class DepartmentsLocation extends AppModel {
            var $name = 'DepartmentsLocation';
            var $belongsTo = array(
                'Department',
                'Location'
            );

            // I'm pretty sure this method is unrelated. It's not being called
            // when this error occurs. Its purpose is to prevent having two
            // HABTM rows with the same location and department.
            function beforeSave() {
                // kill any existing rows with same associations
                $this->log(__FILE__ . ": killing existing HABTM rows", LOG_DEBUG);
                $result = $this->find('all', array("conditions" => array(
                    "location_id" => $this->data['DepartmentsLocation']['location_id'],
                    "department_id" => $this->data['DepartmentsLocation']['department_id']
                )));
                foreach ($result as $row) {
                    $this->delete($row['DepartmentsLocation']['id']);
                }
                return true;
            }
        }
        ?>

    The controllers are completely uninteresting. The problem: if I edit the name of a Location, all of the DepartmentsLocations that were linked to that Location are re-created with empty URLs. Since the models specify that unique is true, this also causes all of the newer rows to overwrite the older rows, which essentially destroys all of the URLs. I would like to know two things: Can I stop this? If so, how? And, on a less technical and more whiney note: why does this even happen? It seems bizarre to me that editing a field through Cake should cause so much trouble, when I can easily go through phpMyAdmin, edit the Location name there, and get exactly the result I would expect. Why does CakePHP touch the HABTM data when I'm just editing a field on a row? It's not even a foreign key!
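
    One way to sidestep this (a sketch, assuming CakePHP 1.x, where saving a record whose data array contains HABTM keys deletes and re-inserts the join rows, losing extra join-table columns like url): update only the field that changed, so Cake never touches the association. The controller action below is hypothetical.

        <?php
        class LocationsController extends AppController {
            var $name = 'Locations';

            function rename($id = null) {
                if (!empty($this->data)) {
                    $this->Location->id = $id;
                    // saveField() updates a single column and leaves the
                    // departments_locations join table untouched.
                    $this->Location->saveField('name', $this->data['Location']['name']);
                    $this->redirect(array('action' => 'index'));
                }
            }
        }
        ?>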

    Read the article

  • Implementing rounded corners on slide down navigation menu

    - by Nick
    I am working on the slide-down menu you can see here. I have rounded corners on both ul#navigation and ul.sub_navigation. When the submenu slides down it is possible to see the border at the bottom of ul.sub_navigation overlap with the content of ul#navigation, when I would like it to slide down smoothly, without the 'flicker'. I am aware that this issue is caused by the rounded corners. I need ul.sub_navigation to cover the rounded corners at the bottom of ul#navigation when the menu drops down, without seeing the double border-bottom issue. I hope this is clear! Code is below. Thanks, Nick

    HTML:

        <ul id="navigation">
            <li class="dropdown"><a href="#">menu</a>
                <ul class="sub_navigation">
                    <li><a href="#">home</a></li>
                    <li><a href="#">help</a></li>
                    <li><a href="#">disable tips</a></li>
                </ul>
            </li>
        </ul>

    jQuery:

        $('.dropdown').hover(function() {
            $(this).find('.sub_navigation').slideToggle();
        });

    CSS:

        ul#navigation, ul.sub_navigation {
            margin: 0;
            padding: 0;
            list-style-type: none;
            min-width: 100px;
            background-color: white;
            font-size: 15px;
            font-family: Trebuchet MS;
            text-align: center;
            -khtml-border-radius: 0 0 5px 5px;
            -moz-border-radius: 0 0 5px 5px;
            -webkit-border-radius: 0 0 5px 5px;
            border-radius: 0 0 5px 5px;
            border: 1px black solid;
            border-top: none;
        }

        ul.sub_navigation {
            margin-left: -1px;
            position: absolute;
            top: 28px;
        }

        ul#navigation {
            float: left;
            position: absolute;
            top: 0;
        }

        ul#navigation li {
            float: left;
            min-width: 100px;
        }

        ul.sub_navigation {
            position: absolute;
            display: none;
        }

        ul.sub_navigation li {
            clear: both;
        }

        a, a:active, a:visited {
            display: block;
            padding: 7px;
        }
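
    One approach that might work here (a sketch, not tested against the linked page): square off the parent's bottom corners while the submenu is open, by toggling a class from the hover handlers, and raise the submenu above the parent so it covers the shared border. The "open" class name is made up for the example.

        /* CSS: while the submenu is open, drop the parent's rounding so there
           are no curved corners for the submenu to reveal. */
        ul#navigation.open {
            -khtml-border-radius: 0;
            -moz-border-radius: 0;
            -webkit-border-radius: 0;
            border-radius: 0;
        }
        ul.sub_navigation {
            z-index: 10; /* keep the submenu above the parent's border */
        }

        // jQuery: toggle the class around the slide animation.
        $('.dropdown').hover(function () {
            $('#navigation').addClass('open');
            $(this).find('.sub_navigation').stop(true, true).slideDown();
        }, function () {
            $(this).find('.sub_navigation').stop(true, true).slideUp(function () {
                $('#navigation').removeClass('open');
            });
        });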

    Read the article

  • Advantage of creating a generic repository vs. specific repository for each object?

    - by LuckyLindy
    We are developing an ASP.NET MVC application, and are now building the repository/service classes. I'm wondering if there are any major advantages to creating a generic IRepository interface that all repositories implement, vs. each repository having its own unique interface and set of methods. For example, a generic IRepository interface might look like this (taken from this answer):

        public interface IRepository : IDisposable
        {
            T[] GetAll<T>();
            T[] GetAll<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter,
                           List<Expression<Func<T, object>>> subSelectors);
            void Delete<T>(T entity);
            void Add<T>(T entity);
            int SaveChanges();
            DbTransaction BeginTransaction();
        }

    Each repository would implement this interface (e.g. CustomerRepository : IRepository, ProductRepository : IRepository, etc.). The alternative we've followed in prior projects would be:

        public interface IInvoiceRepository : IDisposable
        {
            EntityCollection<InvoiceEntity> GetAllInvoices(int accountId);
            EntityCollection<InvoiceEntity> GetAllInvoices(DateTime theDate);
            InvoiceEntity GetSingleInvoice(int id, bool doFetchRelated);
            InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId); // unique
            InvoiceEntity CreateInvoice();
            InvoiceLineEntity CreateInvoiceLine();
            void SaveChanges(InvoiceEntity invoice); // handles inserts or updates
            void DeleteInvoice(InvoiceEntity invoice);
            void DeleteInvoiceLine(InvoiceLineEntity invoiceLine);
        }

    In the second case, the expressions (LINQ or otherwise) are entirely contained in the repository implementation; whoever implements the service just needs to know which repository function to call. I guess I don't see the advantage of writing all the expression syntax in the service class and passing it to the repository. Wouldn't this mean easy-to-mess-up LINQ code is duplicated in many cases? For example, in our old invoicing system, we call InvoiceRepository.GetSingleInvoice(DateTime invoiceDate, int accountId) from a few different services (Customer, Invoice, Account, etc.). That seems much cleaner than writing the following in multiple places:

        rep.GetSingle(x => x.AccountId == someId && x.InvoiceDate == someDate.Date);

    The only disadvantage I see to the specific approach is that we could end up with many permutations of Get* functions, but this still seems preferable to pushing the expression logic up into the service classes. What am I missing?
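
    One answer that often comes up (a sketch with assumed types, not from the post): combine the two, with a generic interface for the CRUD plumbing and a thin specific interface per aggregate for the queries that deserve a name, so the expressions still live in exactly one place. RepositoryBase<T> below is a hypothetical base class holding the generic implementation.

        using System;
        using System.Linq.Expressions;

        public interface IRepository<T> : IDisposable where T : class
        {
            T[] GetAll();
            T GetSingle(Expression<Func<T, bool>> filter);
            void Add(T entity);
            void Delete(T entity);
            int SaveChanges();
        }

        public interface IInvoiceRepository : IRepository<InvoiceEntity>
        {
            // Named queries keep the LINQ out of the services.
            InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId);
        }

        public class InvoiceRepository : RepositoryBase<InvoiceEntity>, IInvoiceRepository
        {
            public InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId)
            {
                // The easy-to-mess-up expression is written once, here.
                return GetSingle(x => x.AccountId == accountId
                                   && x.InvoiceDate == invoiceDate.Date);
            }
        }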

    Read the article

  • How can I add validation to a custom registration form in Drupal?

    - by Nitz
    Hey Friends, I have created a custom registration form in Drupal 6. I changed the behavior of the Drupal registration page by adding this code to the theme's template.php file:

        function earthen_theme($existing, $type, $theme, $path) {
            return array(
                // tell Drupal what template to use for the user register form
                'user_register' => array(
                    'arguments' => array('form' => NULL),
                    'template' => 'user_register', // this is the name of the template
                ),
            );
        }

    and my user_register.tpl.php file looks like this...

        <?php
        $form['access'] = array(
            '#type' => 'fieldset',
            '#title' => t('Access log settings'),
        );
        $form['my_text_field'] = array(
            '#type' => 'textfield',
            '#default_value' => $node->title,
            '#size' => 30,
            '#maxlength' => 50,
            '#required' => TRUE,
        );
        ?>
        <div id="registration_form"><br>
            <div class="field">
                <?php print drupal_render($form['my_text_field']); // prints the custom field ?>
            </div>
            <div class="field">
                <?php print drupal_render($form['account']['name']); // prints the username field ?>
            </div>
            <div class="field">
                <?php print drupal_render($form['account']['pass']); // prints the password field ?>
            </div>
            <div class="field">
                <?php print drupal_render($form['account']['email']); // prints the email field ?>
            </div>
            <div class="field">
                <?php print drupal_render($form['submit']); // prints the submit button ?>
            </div>
        </div>

    How can I add validation to "my_text_field", which is customized? Specifically, I want a datetime picker to open as soon as the user clicks my_text_field, and whichever date the user selects should become the field's value. So guys, help. Thanks in advance, nitish Panchjanya Corporation
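
    A sketch of the usual Drupal 6 approach (assuming a custom module named "mymodule"; none of this is from the post): attach the field and a validate handler with hook_form_alter() instead of the theme layer, then bind a jQuery UI datepicker to the field by class.

        <?php
        // mymodule.module - assumed module name.
        function mymodule_form_user_register_alter(&$form, &$form_state) {
          $form['my_text_field'] = array(
            '#type' => 'textfield',
            '#title' => t('Date'),
            '#required' => TRUE,
            // Hypothetical class the datepicker binds to below.
            '#attributes' => array('class' => 'pick-a-date'),
          );
          $form['#validate'][] = 'mymodule_register_validate';
        }

        function mymodule_register_validate($form, &$form_state) {
          if (!strtotime($form_state['values']['my_text_field'])) {
            form_set_error('my_text_field', t('Please enter a valid date.'));
          }
        }
        ?>

    With jQuery UI loaded on the page, the picker itself is one line: $('.pick-a-date').datepicker();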

    Read the article

  • Flex - Issues with linkbar dataprovider

    - by BS_C3
    Hello Community! I'm having some issues displaying a linkbar. The data I need to display is in an XML file. However, I couldn't get the linkbar to display an XMLList (I did indeed read that you cannot set an XMLList as a linkbar dataProvider...). So I'm transforming the XMLList into an array of objects. Here is some code. XML file:

        <data>
            <languages>
                <language id="en">
                    <label>ENGLISH</label>
                    <source></source>
                </language>
                <language id="fr">
                    <label>FRANCAIS</label>
                    <source></source>
                </language>
                <language id="es">
                    <label>ESPAÑOL</label>
                    <source></source>
                </language>
                <language id="jp">
                    <label>JAPANESE</label>
                    <source></source>
                </language>
            </languages>
        </data>

    AS code that transforms the XMLList into an array of objects:

        private function init():void
        {
            var list:XMLList = generalData.languages.language;
            var arr:ArrayCollection = new ArrayCollection;
            var obj:Object;
            for (var i:int = 0; i < list.length(); i++)
            {
                obj = new Object;
                obj.id = list[i].@id;
                obj.label = list[i].label;
                obj.source = list[i].source;
                arr.addItemAt(obj, arr.length);
            }
            GlobalData.instance.languages = arr.toArray();
        }

    Linkbar code:

        <mx:HBox horizontalAlign="right" width="100%">
            <mx:LinkBar id="language"
                        dataProvider="{GlobalData.instance.languages}"
                        separatorWidth="3"
                        labelField="{label}"/>
        </mx:HBox>

    The separator is not displaying, and neither is the label. But the array is populated (I tested it). Thanks for any help you can provide =) Regards, BS_C3
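
    A guess at the culprit (an assumption, not confirmed in the post): labelField expects the name of the property as a plain string, so labelField="{label}" binds the attribute to an undefined variable instead of naming the field, which would leave the labels blank. The fix would then be:

        <mx:LinkBar id="language"
                    dataProvider="{GlobalData.instance.languages}"
                    separatorWidth="3"
                    labelField="label"/>

    Since "label" is also LinkBar's default labelField, simply deleting the attribute should work as well.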

    Read the article

  • Does this mySQL Stored Procedure Work?

    - by Laxmidi
    Hi, I got the following stored procedure from http://dev.mysql.com/doc/refman/5.1/en/functions-that-test-spatial-relationships-between-geometries.html - does this work?

        CREATE FUNCTION myWithin(p POINT, poly POLYGON) RETURNS INT(1)
        DETERMINISTIC
        BEGIN
            DECLARE n INT DEFAULT 0;
            DECLARE pX DECIMAL(9,6);
            DECLARE pY DECIMAL(9,6);
            DECLARE ls LINESTRING;
            DECLARE poly1 POINT;
            DECLARE poly1X DECIMAL(9,6);
            DECLARE poly1Y DECIMAL(9,6);
            DECLARE poly2 POINT;
            DECLARE poly2X DECIMAL(9,6);
            DECLARE poly2Y DECIMAL(9,6);
            DECLARE i INT DEFAULT 0;
            DECLARE result INT(1) DEFAULT 0;
            SET pX = X(p);
            SET pY = Y(p);
            SET ls = ExteriorRing(poly);
            SET poly2 = EndPoint(ls);
            SET poly2X = X(poly2);
            SET poly2Y = Y(poly2);
            SET n = NumPoints(ls);
            WHILE i < n DO
                SET poly1 = PointN(ls, (i+1));
                SET poly1X = X(poly1);
                SET poly1Y = Y(poly1);
                IF ( ( ( ( poly1X <= pX ) && ( pX < poly2X ) )
                    || ( ( poly2X <= pX ) && ( pX < poly1X ) ) )
                   && ( pY > ( poly2Y - poly1Y ) * ( pX - poly1X )
                              / ( poly2X - poly1X ) + poly1Y ) ) THEN
                    SET result = !result;
                END IF;
                SET poly2X = poly1X;
                SET poly2Y = poly1Y;
                SET i = i + 1;
            END WHILE;
            RETURN result;
        END;

    Usage:

        SET @point = PointFromText('POINT(5 5)');
        SET @polygon = PolyFromText('POLYGON((0 0,10 0,10 10,0 10))');
        SELECT myWithin(@point, @polygon) AS result;

    I'm using phpMyAdmin and it blows up when using stored procedures. If this one works, then I'll try to figure out how to call it in PHP instead. Thanks, Laxmidi
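
    A sketch of calling it from PHP (assumed credentials and database name; the function must already be created, e.g. via the mysql command-line client, since phpMyAdmin often chokes on the DELIMITER handling that multi-statement routines need). Note the polygon ring is closed here by repeating the first point, which MySQL's PolyFromText expects:

        <?php
        // Assumed host/user/password/database.
        $db = new mysqli('localhost', 'user', 'password', 'gisdb');

        $sql = "SELECT myWithin(
                    PointFromText('POINT(5 5)'),
                    PolyFromText('POLYGON((0 0,10 0,10 10,0 10,0 0))')
                ) AS result";

        $row = $db->query($sql)->fetch_assoc();
        echo $row['result']; // 1 if the point is inside the polygon, 0 if not
        ?>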

    Read the article
