Search Results

Search found 7116 results on 285 pages for 'nested queries'.

  • Silverlight 4 Twitter Client - Part 2

    - by Max
    We will create a few classes now to help us with storing and retrieving user credentials, so that we don't ask for them every time we want to speak with Twitter to get some information. Now for the class that sorts out the credentials. We will make this class static so as to ensure there is only one instance of it. This class is mainly going to include a getter and setter for username and password, a method to check if the user is logged in, and another one to log out the user. You can get the code here. Now let us create another class to facilitate easy retrieval of values from the XML-format results Twitter returns for any queries we make. This basically involves just creating a getter and setter for all the values that you would like to retrieve from the returned XML document. You can get the format of the XML document from here. Here is what I have in my Status.cs data structure class. using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes;  namespace MaxTwitter.Classes { public class Status { public Status() {} public string ID { get; set; } public string Text { get; set; } public string Source { get; set; } public string UserID { get; set; } public string UserName { get; set; } } }  Now let us look into implementing Login.xaml.cs. The first thing here is that if the user is already logged in, we need to redirect the user to the home page. This we can accomplish using the OnNavigatedTo event, which is fired when the user navigates to this particular Login page. Here you utilize the Navigate method of NavigationService to go to a different page if the user is already logged in. if (GlobalVariable.isLoggedin())         this.NavigationService.Navigate(new Uri("/Home", UriKind.Relative));  On the submit button click event, add a new event handler, which will perform the WebClient request and download the results as an XML string. WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);  The following lines allow us to create a web client that makes a web request to a URL and gets back the string response - something that came as great news with SL 4 for many SL developers.   WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(TwitterUsername.Text, TwitterPassword.Password);  In the following line, we add an event handler that will be fired once the XML string has been downloaded. Here you can do all your XLINQ stuff.   myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);   myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));  Now let us look at implementing the TimelineRequestCompleted event handler. Here we are not actually using the string response we get from Twitter; I just use it to ensure the user has authenticated successfully, and then save the credentials and redirect to the home page. public void TimelineRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e) { if (e.Error != null) { MessageBox.Show("This application must be installed first"); }  If there is no error, we can save the credentials to reuse them later.
    else { GlobalVariable.saveCredentials(TwitterUsername.Text, TwitterPassword.Password); this.NavigationService.Navigate(new System.Uri("/Home", UriKind.Relative)); } } OK, so now the login page is done. Now the main thing – running this application. This credentials stuff will only work if the application is run out of the browser, so we need to fiddle with a few Silverlight project settings to enable this. Here is how:    Right-click on the Silverlight project > Properties, then check "Enable running application out of browser".    Then click on Out-Of-Browser settings and check the "Require elevated trust…" option. That's it, all done to run. Now press F5 to run the application and fix any errors. Then once the application opens up in the browser with the login page, right-click and choose install.  Once you install, it will automatically run and you can log in and see that you are redirected to the Home page. Here are the files that are related to this post. We will look at implementing the Home page, etc. in the next post. Please post your comments and feedback; it would greatly help me in improving my posts!  Thanks for your time, catch you soon.
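    For reference, here is a minimal sketch of what such a static GlobalVariable class might look like. Only the member names isLoggedin and saveCredentials come from the snippets above; the simple in-memory storage and the logout member shown here are assumptions, and the actual class is linked from the post.

    namespace MaxTwitter.Classes
    {
        // Hypothetical sketch of the static credentials helper referenced above.
        public static class GlobalVariable
        {
            public static string Username { get; private set; }
            public static string Password { get; private set; }

            // True once credentials have been captured for this session.
            public static bool isLoggedin()
            {
                return !string.IsNullOrEmpty(Username) && !string.IsNullOrEmpty(Password);
            }

            // Store the credentials so other pages can reuse them.
            public static void saveCredentials(string username, string password)
            {
                Username = username;
                Password = password;
            }

            // Forget the credentials, effectively logging the user out.
            public static void logout()
            {
                Username = null;
                Password = null;
            }
        }
    }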

    Read the article

  • Possible SWITCH Optimization in DAX – #powerpivot #dax #tabular

    - by Marco Russo (SQLBI)
    In one of the Advanced DAX workshops I taught this year, I had an interesting discussion about how to optimize a SWITCH statement (which could be frequently used for checking a slicer, as in the Parameter Table pattern). Let's start with the problem. What happens when you have such a statement? Sales := SWITCH ( VALUES ( Period[Period] ), "Current", [Internet Total Sales], "MTD", [MTD Sales], "QTD", [QTD Sales], "YTD", [YTD Sales], BLANK () ) The SWITCH statement is in reality just syntactic sugar for a nested IF statement. When you place such a measure in a pivot table, the IF options are evaluated for every cell of the pivot table. In order to optimize performance, the DAX engine usually does not compute cell-by-cell, but tries to compute the values in bulk mode. However, if a measure contains an IF statement, every cell might have a different execution path, so the current implementation might evaluate all the possible IF branches in bulk mode, so that for every cell the result from one of the branches will already be available in a pre-calculated dataset. The price for that could be high. If you consider the previous Sales measure, the YTD Sales measure could be evaluated for all the cells where it's not required, and also when YTD is not selected at all in a pivot table. The actual optimization made by the DAX engine could be different in every build, and I expect newer builds of Tabular and Power Pivot to be better than older ones. However, we still don't live in an ideal world, so it could be better to try to help the engine find a better execution plan. One student (Niek de Wit) proposed this approach: Selection := IF ( HASONEVALUE ( Period[Period] ), VALUES ( Period[Period] ) ) Sales := CALCULATE ( [Internet Total Sales], FILTER ( VALUES ( 'Internet Sales'[Order Quantity] ), 'Internet Sales'[Order Quantity] = IF ( [Selection] = "Current", 'Internet Sales'[Order Quantity], -1 ) ) ) + CALCULATE ( [MTD Sales], FILTER ( VALUES ( 'Internet Sales'[Order Quantity] ), 'Internet Sales'[Order Quantity] = IF ( [Selection] = "MTD", 'Internet Sales'[Order Quantity], -1 ) ) ) + CALCULATE ( [QTD Sales], FILTER ( VALUES ( 'Internet Sales'[Order Quantity] ), 'Internet Sales'[Order Quantity] = IF ( [Selection] = "QTD", 'Internet Sales'[Order Quantity], -1 ) ) ) + CALCULATE ( [YTD Sales], FILTER ( VALUES ( 'Internet Sales'[Order Quantity] ), 'Internet Sales'[Order Quantity] = IF ( [Selection] = "YTD", 'Internet Sales'[Order Quantity], -1 ) ) ) At first sight, you might think it's impossible that this approach could be faster. However, if you examine with the profiler what happens, there is a different story. Every original IF execution branch is now a separate CALCULATE statement, which applies a filter that does not execute the required measure calculation if the result of the FILTER is empty.
    I used the 'Internet Sales'[Order Quantity] column in this example just because in Adventure Works it has only one value (every row has 1): in the real world, you should use a column that has a very low number of distinct values, or a column that always has the same value for every row (so it will be compressed very well!). Because the value –1 is never used in this column, the IF comparison in the filter discards all the values iterated in the filter if the selection does not match the desired value. I hope to have time in the future to write a longer article about this optimization technique, but in the meantime I have seen this optimization be useful in many other implementations. Please write your feedback if you find scenarios (in both Power Pivot and Tabular) where you obtain performance improvements using this technique!
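    To make the "syntactic sugar" point concrete, here is a rough sketch (not from the original post) of the nested IF form that the SWITCH above corresponds to; every branch is a candidate for evaluation in exactly the same way:

    Sales :=
        IF (
            VALUES ( Period[Period] ) = "Current",
            [Internet Total Sales],
            IF (
                VALUES ( Period[Period] ) = "MTD",
                [MTD Sales],
                IF (
                    VALUES ( Period[Period] ) = "QTD",
                    [QTD Sales],
                    IF (
                        VALUES ( Period[Period] ) = "YTD",
                        [YTD Sales],
                        BLANK ()
                    )
                )
            )
        )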

    Read the article

  • Mscorlib mocking minus the attribute

    - by mehfuzh
    Mocking .NET framework members (a.k.a. mscorlib) is always a daunting task. It's a breed of static and final methods, full of surprises. Technically, intercepting mscorlib members is completely different from intercepting other class libraries. This is the reason it is dealt with differently. Generally, I prefer writing a wrapper around an mscorlib member (e.g. File.Delete("abc.txt")) and exposing it via an interface, but that is not always an easy task if you already have a years-old codebase. When mocking mscorlib members, the first thing that comes to people's mind is DateTime.Now. If you Google it, you will find tons of examples dealing with just that. Maybe it's the most important class that we can't ignore, so I will create an example using JustMock Q2 with the same. In Q2 2012, we got rid of the MockClassAttribute for mocking mscorlib members. JustMock is already attribute-free for mocking class libraries. We strongly believe that vendor-specific attributes only make your code smelly, and therefore decided the same for mscorlib. Now, I want to fake DateTime.Now for the following class: public class NestedDateTime { public DateTime GetDateTime() { return DateTime.Now; } } It is the simplest one there can be. The first thing here is that I tell JustMock "hey, we have a DateTime.Now in the NestedDateTime class that we want to mock". To do so, during the test initialization I write this: Mock.Replace(() => DateTime.Now).In<NestedDateTime>(x => x.GetDateTime()); I can also define it for all the members in the class, but that's just a waste of extra watts. Mock.Replace(() => DateTime.Now).In<NestedDateTime>(); Now the question: why should I bother doing it? The answer is that I am not using an attribute, and with this approach I can mock any framework member, not just File, FileInfo or DateTime. Note that we could already mock beyond those three, but when nested inside a complex class, JustMock was not intercepting correctly. Therefore, we decided to get rid of the attribute altogether, fixing the issue. Finally, I write my test as usual.
    [TestMethod] public void ShouldAssertMockingDateTimeFromNestedClass() { var expected = new DateTime(2000, 1, 1); Mock.Arrange(() => DateTime.Now).Returns(expected); Assert.Equal(new NestedDateTime().GetDateTime(), expected); } That's it, we are good. Now let me do the same for a random one; let's say I want to mock a member from DriveInfo: Mock.Replace<DriveInfo[]>(() => DriveInfo.GetDrives()).In<MsCorlibFixture>(x => x.ShouldReturnExpectedDriveWhenMocked()); Moving forward, I write my test: [TestMethod] public void ShouldReturnExpectedDriveWhenMocked() { Mock.Arrange(() => DriveInfo.GetDrives()).MustBeCalled(); DriveInfo.GetDrives(); Mock.Assert(() => DriveInfo.GetDrives()); } Here is one convention: you have to replace the mscorlib member before executing the target method that contains it. Here the call to DriveInfo is within the MsCorlibFixture, therefore it should be defined during test initialization or before executing the test method. Hope this gives you the idea.
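    As a side note, the wrapper-and-interface approach mentioned at the beginning of the post might look something like the following minimal sketch for File.Delete; the IFileSystem and CleanupTask names are invented for illustration and are not part of JustMock:

    // Hypothetical wrapper around an mscorlib member so it can be faked
    // through an ordinary interface instead of intercepting mscorlib itself.
    public interface IFileSystem
    {
        void Delete(string path);
    }

    public class FileSystem : IFileSystem
    {
        public void Delete(string path)
        {
            System.IO.File.Delete(path);
        }
    }

    public class CleanupTask
    {
        private readonly IFileSystem fileSystem;

        public CleanupTask(IFileSystem fileSystem)
        {
            this.fileSystem = fileSystem;
        }

        // In a test, IFileSystem can be replaced with a mock,
        // so no real file is ever touched.
        public void Run()
        {
            fileSystem.Delete("abc.txt");
        }
    }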

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database. Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include: Support for Oracle TimesTen In-Memory Database; Support for Date-Time using R's POSIXct/POSIXlt data types; RAW, BLOB and BFILE data type support; Option to specify number of rows per fetch operation; Option to prefetch LOB data; Break support using Ctrl-C; Statement caching support. TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries: Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE; Analytic clauses: OVER PARTITION BY and OVER ORDER BY; Multidimensional grouping operators: Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS; Grouping functions: GROUP, GROUPING_ID, GROUP_ID; WITH clause, which allows repeated references to a named subquery block; Aggregate expressions over DISTINCT expressions; General expressions that return a character string in the source or a pattern within the LIKE predicate; Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause). Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details. Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver. > install.packages("ROracle") > library(ROracle) Loading required package: DBI > drv <- dbDriver("Oracle") Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user. > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct") You have the option to report the server type (Oracle or TimesTen): > print (paste ("Server type =", dbGetInfo (conn)$serverType)) [1] "Server type = TimesTen IMDB" To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session. > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE) [1] TRUE Verify that the newly created IRIS table is available in the database.
    To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively. > dbListTables (conn) [1] "IRIS" > dbListFields (conn, "IRIS") [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES" To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time. > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)') > iris.summary SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH 1 setosa 5.006000 3.428000 1.462 0.246000 2 versicolor 5.936000 2.770000 4.260 1.326000 3 virginica 6.588000 2.974000 5.552 2.026000 4 <NA> 5.843333 3.057333 3.758 1.199333 Finally, disconnect from the TimesTen Database. > dbCommit (conn) [1] TRUE > dbDisconnect (conn) [1] TRUE We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.
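    As a small addition to the session above (not part of the original post), the whole table can also be pulled back into R as a data frame using the standard DBI call dbReadTable:

    > # Read the IRIS table back from TimesTen into a local R data frame
    > iris.db <- dbReadTable (conn, "IRIS")
    > dim (iris.db)    # should report 150 rows and 5 columns, matching the built-in iris data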

    Read the article

  • Maximize Performance and Availability with Oracle Data Integration

    - by Tanu Sood
    Alert: Oracle is hosting the 12c Launch Webcast for Oracle Data Integration and Oracle GoldenGate on Tuesday, November 12 (tomorrow) to discuss the new capabilities in detail and share customer perspectives. Hear directly from customer experts and executives from SolarWorld Industries America, British Telecom and Rittman Mead, and get your questions answered live by product experts. Register for this complimentary webcast today and join in the discussion tomorrow. Author: Irem Radzik, Senior Principal Product Director, Oracle. Organizations that want to use IT as a strategic point of differentiation prefer Oracle's complete application offering to drive better business performance and optimize their IT investments. These enterprise applications are at the center of business operations and they contain critical data that needs to be accessed continuously, as well as analyzed and acted upon in a timely manner. These systems also need to operate with high performance and availability, which means analytical functions should not degrade application performance, and even system maintenance and upgrades should not interrupt availability. Oracle's data integration products, Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality, provide the core foundation for bringing data from various business-critical systems together to gain a broader, unified view. As a more advanced offering compared to 3rd-party products, Oracle's data integration products facilitate real-time reporting for Oracle Applications without impacting application performance, and provide the ability to upgrade and maintain the system without taking downtime. Oracle GoldenGate is certified for Oracle Applications, including E-Business Suite, Siebel CRM, PeopleSoft, and JD Edwards, for moving transactional data in real time to a dedicated operational reporting environment. This solution allows the app users to offload the resource-heavy queries to the reporting instance(s), reducing CPU utilization, improving OLTP performance, and extending the lifetime of existing IT assets. In addition, having a dedicated reporting instance with up-to-the-second transactional data allows optimizing the reporting environment and even decreasing costs, as GoldenGate can move only the required data from expensive mainframe environments to cost-efficient open system platforms.  With real-time data replication capabilities, GoldenGate is also certified to enable application upgrades and database/hardware/OS migration without impacting business operations. GoldenGate is certified for Siebel CRM, Communications Billing and Revenue Management, and JD Edwards for supporting zero-downtime upgrades to the latest app version. GoldenGate synchronizes a parallel, upgraded system with the old version in real time, thus enabling continuous operations during the process.
    Oracle GoldenGate is also certified for minimal-downtime database migrations for Oracle E-Business Suite and other key applications. GoldenGate's solution also minimizes the risk by offering a failback option after the switchover to the new environment. Furthermore, Oracle GoldenGate's bidirectional active-active data replication is certified for Oracle ATG Web Commerce to enable geographic load balancing and high availability for ATG customers. For enabling better business insight, Oracle Data Integration products power Oracle BI Applications with high-performance bulk and real-time data integration. Oracle Data Integrator (ODI) is embedded in Oracle BI Applications version 11.1.1.7.1 and helps to integrate data end-to-end across the full BI Applications architecture, supporting capabilities such as data lineage, which helps business users trace report data back to its sources. ODI is integrated with Oracle GoldenGate and provides Oracle BI Applications customers the option to use real-time transactional data in analytics, and to do so non-intrusively. By using Oracle GoldenGate with the latest release of Oracle BI Applications, organizations not only leverage fresh data in analytics, but also eliminate the need for an ETL batch window and minimize the impact on OLTP systems. You can learn more about the latest 12c version of the Oracle Data Integration products in our upcoming launch webcast and access the app-specific free resources in the new Data Integration for Oracle Applications Resource Center.

    Read the article

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is the lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:     Random generator = new Random();     Func<int> func = () => generator.Next(10); In this case, the compiler generates a new class called <>c__DisplayClass1 which is marked with the CompilerGenerated attribute. [CompilerGenerated] private sealed class <>c__DisplayClass1 {     // Fields     public Random generator;     // Methods     public int b__0()     {         return this.generator.Next(10);     } } Two quick comments on this: (i)    A display was the means by which compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name. (ii)    It is a shame that the same attribute is used to mark all compiler-generated classes, as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high-level concepts. We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func object simply constructs an instance of this class and initialises the fields to their initial values.     <>c__DisplayClass1 class2 = new <>c__DisplayClass1();     class2.generator = new Random();     Func<int> func = new Func<int>(class2.b__0); Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this example is correctly decompiled. The use of compiler-generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.         {             yield return 1;             yield return 2;             yield return 3;         } Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerable/IEnumerator pair, with the enumerator's Current property for fetching the current value and its MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerable- or IEnumerator-returning method using the yield keyword causes the compiler to produce the implementation of an iterator. Take the following piece of code.         IEnumerable<int> GetItems()         {             yield return 1;             yield return 2;             yield return 3;         } The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread.
    This latter value is used for optimising the creation of iterator instances. [CompilerGenerated] private sealed class <GetItems>d__0 : IEnumerable<int>, IEnumerable, IEnumerator<int>, IEnumerator, IDisposable {     // Fields     private int <>1__state;     private int <>2__current;     public Program <>4__this;     private int <>l__initialThreadId; The body gets converted into the code to construct and initialize this new class. private IEnumerable<int> GetItems() {     <GetItems>d__0 d__ = new <GetItems>d__0(-2);     d__.<>4__this = this;     return d__; } When the class is constructed we set the state, which was passed through as -2, and the current thread. public <GetItems>d__0(int <>1__state) {     this.<>1__state = <>1__state;     this.<>l__initialThreadId = Thread.CurrentThread.ManagedThreadId; } The state needs to be set to 0 to represent a valid enumerator, and this is done in the GetEnumerator method, which optimises for the usual case where the returned enumerator is only used once. IEnumerator<int> IEnumerable<int>.GetEnumerator() {     if ((Thread.CurrentThread.ManagedThreadId == this.<>l__initialThreadId)               && (this.<>1__state == -2))     {         this.<>1__state = 0;         return this;     } The state machine itself is implemented inside the MoveNext method. private bool MoveNext() {     switch (this.<>1__state)     {         case 0:             this.<>1__state = -1;             this.<>2__current = 1;             this.<>1__state = 1;             return true;         case 1:             this.<>1__state = -1;             this.<>2__current = 2;             this.<>1__state = 2;             return true;         case 2:             this.<>1__state = -1;             this.<>2__current = 3;             this.<>1__state = 3;             return true;         case 3:             this.<>1__state = -1;             break;     }     return false; } At each stage, the current value of the state is used to determine how far we got, and then we generate the next value, which we return after recording the next state. Finally we return false from MoveNext to signify the end of the sequence. Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext, and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value. Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount that we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler generated fields.
    In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code-generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.
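    For readers who want to see the protocol that the generated state machine implements, here is a small hand-written sketch (not decompiler output) that drives GetItems through its enumerator directly; each call to MoveNext runs the switch inside the generated MoveNext shown above until the next yield return is reached:

    using System;
    using System.Collections.Generic;

    class IteratorDemo
    {
        static IEnumerable<int> GetItems()
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

        static void Main()
        {
            // foreach compiles down to roughly this pattern: get the enumerator,
            // then alternate MoveNext/Current until MoveNext returns false.
            using (IEnumerator<int> e = GetItems().GetEnumerator())
            {
                while (e.MoveNext())
                {
                    Console.WriteLine(e.Current);   // prints 1, 2, 3
                }
            }
        }
    }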

    Read the article

  • YouTube Scalability Lessons

    - by Bertrand Matthelié
    Very interesting blog post by Todd Hoff at highscalability.com presenting "7 Years of YouTube Scalability Lessons in 30 min" based on a presentation from Mike Solomon, one of the original engineers at YouTube: …. The key takeaway of the talk for me was doing a lot with really simple tools. While many teams are moving on to more complex ecosystems, YouTube really does keep it simple. They program primarily in Python, use MySQL as their database, they've stuck with Apache, and even new features for such a massive site start as a very simple Python program. That doesn't mean YouTube doesn't do cool stuff, they do, but what makes everything work together is more a philosophy or a way of doing things than technological hocus pocus. What made YouTube into one of the world's largest websites? Read on and see... Stats: 4 billion views a day; 60 hours of video is uploaded every minute; 350+ million devices are YouTube enabled; revenue doubled in 2010; the number of videos has gone up 9 orders of magnitude and the number of developers has only gone up two orders of magnitude; 1 million lines of Python code. Stack: Python - most of the lines of code for YouTube are still in Python. Every time you watch a YouTube video you are executing a bunch of Python code. Apache - when you think you need to get rid of it, you don't. Apache is a real rockstar technology at YouTube because they keep it simple. Every request goes through Apache. Linux - the benefit of Linux is there's always a way to get in and see how your system is behaving. No matter how bad your app is behaving, you can take a look at it with Linux tools like strace and tcpdump. MySQL - is used a lot. When you watch a video you are getting data from MySQL. Sometimes it's used as a relational database or a blob store. It's about tuning and making choices about how you organize your data. Vitess - a new project released by YouTube, written in Go, it's a frontend to MySQL. It does a lot of optimization on the fly, it rewrites queries and acts as a proxy. Currently it serves every YouTube database request. It's RPC based. Zookeeper - a distributed lock server. It's used for configuration. Really interesting piece of technology. Hard to use correctly, so read the manual. Wiseguy - a CGI servlet container. Spitfire - a templating system.
    It has an abstract syntax tree that lets them do transformations to make things go faster. Serialization formats - no matter which one you use, they are all expensive. Measure. Don't use pickle. Not a good choice. Found protocol buffers slow. They wrote their own BSON implementation, which is 10-15 times faster than the one you can download. ...Continues. Read the blog. Watch the video.

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it was a larger part of my life, but even then never in the same sort of passionate way as for a number of our SQL friends.  For me, I find that mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive. Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger in the same manner as does jogging around a dewy park on Saturday morning. In physical training, there are at least two goals, the first of which is to be physically able to do a task. The second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you’ve never ridden a bike, you could be a physics professor/Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child’s play. For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to back up, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine.  You have the skill. Making it natural then requires repetition, and experience is the primary requirement, not simply learning about a topic. The hardest part of being really good at something is this difference between knowledge and skill. I have recently taken several informative training classes with Kimball University on data warehousing and ETL. Now I have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse designing of late and have started to improve to some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on.  Data warehousing is still a task that is not yet deeply ingrained in my brain muscle memory. On the other hand, relational database design is something that, no matter how much or how little I may get to do it, I am comfortable doing. I have done it as a profession now for well over a decade, I teach classes on it, and I also have done (and continue to do) a lot of mental training beyond the work day. Sometimes the training is just basic education, some of it reading blogs and attending sessions at PASS events.  My best training comes from spending time working on other people’s design issues in forums (though not nearly as much as I would like to lately). Working through other people’s problems is a great way to exercise your brain on problems with which you’re not immediately familiar. The final bit of exercise I find useful for cultivating mental fitness for a data professional is also probably the nerdiest thing that I will ever suggest you do.  Akin to running in place, the idea is to work through designs in your head.
I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help.) Never are the designs truly fleshed out, but enough to work through structures and processes.  On “paper”, I have designed database systems to catalog things as trivial as my Lego creations, rental car companies and my audio and video collections. Once I get the database designed mentally, sometimes I will create the database, add some data (often using Red-Gate’s Data Generator), and write a few queries to see if a concept was realistic, but I will rarely fully flesh out the database since I have no desire to do any user interface programming anymore.  The mental training allows me to keep in practice for when the time comes to do the work I love the most for real…even if I have been spending most of my work time lately building data warehouses.  If you are really strong of mind and body, perhaps you can mix a mental run with a physical run; though don’t run off of a cliff while contemplating how you might design a database to catalog the trees on a mountain…that would be contradictory to the purpose of both types of exercise.
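    For readers unfamiliar with the "group data three ways (and not four)" remark above, a hypothetical T-SQL sketch (the table and column names are invented for illustration) might look like this:

    -- Aggregate by region, by product, and as a grand total,
    -- but not by every (region, product) combination.
    SELECT Region, Product, SUM(SalesAmount) AS TotalSales
    FROM dbo.Sales
    GROUP BY GROUPING SETS ( (Region), (Product), () );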

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green. If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise that smartphone adoption is reaching the farthest corners of the globe; the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics?' Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated, it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience. The net result of which is happier customers. The obvious benefits but the lesser realised impact: Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality, however, is that mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change are much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and the ability to anticipate, staff satisfaction and retention is higher, and real-time feedback constant.
    The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships. IT and HR - partners and stewards of mobility: One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT, from one of service provision to partnership. The dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect. No turning back, and no desire to: With real-time analytics on performance and the ability to give immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear that, as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisation's evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, compliance is simpler with mobile-enabled automation and quality data. Their world has changed for the better. For the CIO, mobility also assists in optimising performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them mean employees don't need high-level technical expertise to train users. It reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line, all the while lowering the cost of assets and related maintenance work by simplifying processes. Rewards of a mobile enterprise outweigh risks: With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience, but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile.
    Several state government websites have advice on how to create mobile apps and more. And as recently as last week the Victorian Minister for Technology, Gordon Rich-Phillips, unveiled his state government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops. Aaron is the Vice President of HCM Strategy at Oracle Corporation, where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN t-sql operator to exclude a certain set of values. Here’s a very simple example table to illustrate the issue: Customers: CustomerId INT NOT NULL, Primary Key; CustomerName nvarchar(100) NOT NULL; SalesRegionId INT NULL.   The ‘SalesRegionId’ column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time, but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows: CustomerId CustomerName SalesRegionId 1 Customer A 1 2 Customer B NULL 3 Customer C 4 4 Customer D 2 5 Customer E 3   How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this: SELECT CustomerId, CustomerName, SalesRegionId FROM Customers WHERE SalesRegionId NOT IN (2,4)   Will this work? In short, no; at least not in the way that you might expect. Here’s what this query will return given the example data we’re working with: CustomerId CustomerName SalesRegionId 1 Customer A 1 5 Customer E 3   I was expecting that this query would also return ‘Customer B’, since that customer has a NULL SalesRegionId. In my mind, having a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn’t suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test, and I simply made a poor assumption that the NOT IN would work the way that I thought it should. So why doesn’t this work the way that I thought it should? From the MSDN documentation on the t-sql IN operator: If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression. The key phrase out of that quote is, “… is equal to any expression from the comma-separated list…”. The NULL SalesRegionId isn’t included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS: The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name. In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in IN sub-queries or expressions that are used with the IN operator: Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values in together with IN or NOT IN can produce unexpected results.
If I were to include a ‘SET ANSI_NULLS OFF’ command right above my SELECT statement I would get ‘Customer B’ returned in the results, but that’s definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause: 1: SELECT 2: CustomerId, 3: CustomerName, 4: SalesRegionId 5: FROM Customers 6: WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)   This query works and properly includes ‘Customer B’ in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we’d end up with: 1: DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT) 2: INSERT @regionsToIgnore values (2),(4) 3:  4: SELECT 5: c.CustomerId, 6: c.CustomerName, 7: c.SalesRegionId 8: FROM Customers c 9: LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId 10: WHERE r.IgnoredRegionId IS NULL By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large and it also will correctly include any customers that do not yet have a sales region.
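A NOT EXISTS predicate is another NULL-safe way to express the same exclusion. Here is a minimal sketch, assuming the same example Customers table and the @regionsToIgnore table variable introduced above (this variant is an illustration, not the query from the original procedure):
DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT)
INSERT @regionsToIgnore VALUES (2),(4)
-- NOT EXISTS is not tripped up by NULL the way NOT IN is: for 'Customer B' the
-- correlated comparison evaluates to UNKNOWN, so EXISTS returns FALSE and the row is kept.
SELECT
    c.CustomerId,
    c.CustomerName,
    c.SalesRegionId
FROM Customers c
WHERE NOT EXISTS (SELECT 1
                  FROM @regionsToIgnore r
                  WHERE r.IgnoredRegionId = c.SalesRegionId)
Like the LEFT OUTER JOIN version, this returns Customers A, B and E, including the customer with no sales region.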

    Read the article

  • Calculated Fields - Idiosyncrasies

    - by PointsToShare
© 2011 By: Dov Trietsch. All rights reserved.
Calculated Fields and some of their Idiosyncrasies
Did you try to write a calculated field formula directly into the screen? Good Luck – You'll need it! Calculated Fields are a sophisticated OOB feature of SharePoint, so you could think that they are best left to the end users – at least to the power users. But they reach their limits before the "Professionals" do, and the tough ones come back to us anyway. Back to business; the simpler the formula, the easier it is. Still, use your favorite editor to write it, then cut it and paste it to the ridiculously small window. What about complex formulae? Write them in steps! Here is a case in point and an idiosyncrasy or two.
Our welders need to be certified and recertified every two years. Some of them are certifiable…, but I digress. To be certified you need to pass an eye exam, and two more tests – test A and test B. For each of those you have an expiry date. When renewed, each expiry date is advanced by two years from the date of renewal. My users wanted a visual clue so that when the supervisor looks at the list, she'll have a KPI symbol telling her if anything expired (Red), is going to expire within the next 90 days (Yellow) or is not to be worried about (Green). Not all the dates are filled, and any blank date implies a complete lack of certification in that particular requirement. Obviously, I needed to figure out the minimum of these 3 dates – a simple enough formula: =MIN([Date_EyeExam], [Date_TestA], [Date_TestB]). Aha! Here is idiosyncrasy #1. When one of the dates is a null, MIN(Date1, Date2) returns the non-null date. Null is construed as "Far, far away". The funny thing is that when you compare it to Today, the null is the lesser one. So a null is less than today, but not when MIN is calculated.
Now, to me the fact that the welder does not have an exam date is synonymous with his exam being prehistoric, or at least past due. So here is what I did. Solution: Let's set a blank date to 1/1/1800. How will we do that? Use the IF. IF([Field] rel relValue, TrueValue, FalseValue). rel is any relational operator <, >, <=, >=, =, <>. If the field is related to the relValue as prescribed, the "IF" returns the TrueValue, otherwise it returns the FalseValue. Thus: =IF([SomeDate]="",1/1/1800,[SomeDate]) will return 1/1/1800 if the date is blank and the date itself if not. So, using this formula, if the welder missed an exam, the returned exam date will be far in the past. It would be nice if we could take such a formula and make it into a reusable function. Alas, here is a serious shortcoming of calculated fields: You cannot write subs and functions!! Aha, but we can use interim calculated fields! So let's create 3 calculated fields as follows:
1: c_DateTestA as a calculated field of the date type, with the formula: IF([Date_TestA]="",1/1/1800,[Date_TestA])
2: c_DateTestB as a calculated field of the date type, with the formula: IF([Date_TestB]="",1/1/1800,[Date_TestB])
3: c_DateEyeExam as a calculated field of the date type, with the formula: IF([Date_EyeExam]="",1/1/1800,[Date_EyeExam])
And now use these to get c_MinDate. This is again a calculated field of type date, with the formula: MIN(c_DateTestA, c_DateTestB, c_DateEyeExam)
Note that I omitted the square brackets. In "properly named" fields – where there are no embedded spaces – we don't need the square brackets. I actually strongly recommend using underscores in place of spaces in all the field names in your lists.
Among other things, it makes using CAML much simpler. Now, we still need to apply the KPI to this minimal date. I am going to use the available KPI graphics that come with SharePoint and are always available in your 12 hive.
"/_layouts/images/kpidefault-2.gif" is the Red KPI
"/_layouts/images/kpidefault-1.gif" is the Yellow KPI
"/_layouts/images/kpidefault-0.gif" is the Green KPI
And here is the nested IF formula that will do the trick:
=IF(c_MinDate<=Today,"/_layouts/images/kpidefault-2.gif", IF(c_MinDate<Today+90,"/_layouts/images/kpidefault-1.gif","/_layouts/images/kpidefault-0.gif"))
Nice! BUT when I tested, it did not work! This is Idiosyncrasy #2: A calculated field based on a calculated field based on a calculated field does not work. You have to stop at two levels! Back to the drawing board: We have to reduce by one level. How? We'll eliminate the c_DateX items in the formula and replace them with the proper IF formulae. Notice that this needs to be done with precision. You are much better off doing it in your favorite line editor than inside the cramped space that SharePoint gives you. So here is the result:
MIN(IF([Date_TestA]="",1/1/1800,[Date_TestA]), IF([Date_TestB]="",1/1/1800,[Date_TestB]), IF([Date_EyeExam]="",1/1/1800,[Date_EyeExam]))
Note that I bolded the parentheses and painted them red. They have to match for this formula to work. Now we can leave the KPI formula as is and test again. This time with SUCCESS! Conclusion: build the inner functions first, and then embed them inside the outer formulae. Do this as long as necessary. Use your favorite line editor. Limit yourself to 2 levels. That's all folks! Almost! As soon as I finished doing all of the above, my users added yet another level of complexity. They added another test, a test that must be passed but never expires, and asked for yet another KPI, this time in Black, to denote that a test is not just past due but altogether missing. I just finished this. Let's hope it ends here! And OH, the formula =IF(c_MinDate<=Today,"/_layouts/images/kpidefault-2.gif",IF(c_MinDate<Today+90,"/_layouts/images/kpidefault-1.gif","/_layouts/images/kpidefault-0.gif")) deals with "Today", and this is a subject deserving a discussion of its own! That's all folks?! (and this time I mean it)

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember: testing) and other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. The reason I mention this is that commercial profilers support this functionality only in their professional editions, and normally only for .NET 4.0 onwards, since only from that version does the profiling API support attaching to a running process. This library differs in many aspects from "normal" profilers because, while profiling yourself, you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which exists only as a method local in another thread, you can get your hands on it now … Enough theory. Show me some code:
/// <summary>
/// Show feature to not only get statistics out of a process but also the newly allocated
/// instances since the last call to MarkCurrentObjects.
/// GetNewObjects does return the newly allocated objects as object array
/// </summary>
static void InstanceTracking()
{
    using (var dumper = new MemoryDumper()) // if you have problems use true,true to see the debugger windows
    {
        dumper.MarkCurrentObjects();
        Allocate();
        ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                 .ToLookup( x => x.GetType() );
        Console.WriteLine("New Strings:");
        foreach (var newStr in newObjects[typeof(string)] )
        {
            Console.WriteLine("Str: {0}", newStr);
        }
    }
}
…
New Strings:
Str: qqd
Str: String data:
Str: String data: 0
Str: String data: 1
…
This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and make queries upon them. When I find more time I can reconstruct the object root graph from it from my own process. Is this cool or what? You can also peek into the Finalization Queue to check whether you accidentally forgot to dispose a whole bunch of objects …
/// <summary>
/// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization and have no other object roots anymore.
/// </summary>
static void NotYetFinalizedObjects()
{
    using (var dumper = new MemoryDumper())
    {
        object[] finalizable = dumper.GetObjectsReadyForFinalization();
        Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                          finalizable.Length,
                          String.Join(",", finalizable.ToLookup( x => x.GetType() )
                                                      .Select( x => x.Key.Name)) );
    }
}
How does it work? The W of WMemoryProfiler is a good hint. It employs Windbg and the SOS dll to do the heavy lifting and concentrates on an easy-to-use API which hides Windbg completely. If you do not want to see Windbg, you will never see it. In my experience the most complex thing is actually to download Windbg from the Windows 8 Standalone SDK. This is described in the Readme and, in much greater detail, in the exception you are greeted with if it is missing. So I will not go into this here. What Next? Depending on the feedback I get, I can imagine some features which might be useful as well:
Calculate first order GC Roots from the actual object graph
Identify global statics in Types in the object graph
Support read-out of the finalization queue of .NET 2.0 as well
Support Memory Dump analysis (again a feature only supported by commercial profilers in their professional editions if it is supported at all) Deserialize objects from a memory dump into a live process back (this would need some more investigation but it is doable) The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big but since it is never written to it is very fast to generate. When your process crashes with a memory dump you could transfer this data structure back into a live viewer which can then nicely display your program state at the point it did crash. This is an advanced trouble shooting technique I have not seen anywhere yet but it could be quite useful. You can have here a look at the current feature list of WMemoryProfiler with some examples.   How To Get Started? First I would download the released source package (it is tiny). And compile the complete project. Then you can compile the Example project (it has this name) and uncomment in the main method the scenario you want to check out. If you are greeted with an exception it is time to install the Windows 8 Standalone SDK which is described in great detail in the exception text. Thats it for the first round. I have seen something more limited in the Java world some years ago (now I cannot find the link anymore) but anyway. Now we have something much better.
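As a small illustration of how the API shown above composes, here is a hedged sketch of a per-type allocation summary. It assumes only the MemoryDumper members already used in this post (MarkCurrentObjects and GetNewObjects); the helper name and the grouping logic are mine:
// Hedged sketch: summarise newly allocated objects per type around an arbitrary piece of work.
// Assumes: using System; using System.Linq; and a reference to the WMemoryProfiler assembly.
static void PrintNewAllocationsByType(Action workUnderTest)
{
    using (var dumper = new MemoryDumper())
    {
        dumper.MarkCurrentObjects();   // remember what is currently on the managed heaps
        workUnderTest();               // run the code whose allocations we want to see
        var newObjectsByType = dumper.GetNewObjects()
                                     .GroupBy(o => o.GetType())
                                     .OrderByDescending(g => g.Count());
        foreach (var group in newObjectsByType)
        {
            Console.WriteLine("{0,6} new instance(s) of {1}", group.Count(), group.Key.FullName);
        }
    }
}
In an integration test this gives a cheap allocation regression check, e.g. PrintNewAllocationsByType(() => myService.ProcessBatch()) with a hypothetical service call.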

    Read the article

  • Separating text strings into a table of individual words in SQL via XML.

    - by Phil Factor
    p.MsoNormal {margin-top:0cm; margin-right:0cm; margin-bottom:10.0pt; margin-left:0cm; line-height:115%; font-size:11.0pt; font-family:"Calibri","sans-serif"; } Nearly nine years ago, Mike Rorke of the SQL Server 2005 XML team blogged ‘Querying Over Constructed XML Using Sub-queries’. I remember reading it at the time without being able to think of a use for what he was demonstrating. Just a few weeks ago, whilst preparing my article on searching strings, I got out my trusty function for splitting strings into words and something reminded me of the old blog. I’d been trying to think of a way of using XML to split strings reliably into words. The routine I devised turned out to be slightly slower than the iterative word chop I’ve always used in the past, so I didn’t publish it. It was then I suddenly remembered the old routine. Here is my version of it. I’ve unwrapped it from its obvious home in a function or procedure just so it is easy to appreciate. What it does is to chop a text string into individual words using XQuery and the good old nodes() method. I’ve benchmarked it and it is quicker than any of the SQL ways of doing it that I know about. Obviously, you can’t use the trick I described here to do it, because it is awkward to use REPLACE() on 1…n characters of whitespace. I’ll carry on using my iterative function since it is able to tell me the location of each word as a character-offset from the start, and also because this method leaves punctuation in (removing it takes time!). However, I can see other uses for this in passing lists as input or output parameters, or as return values.   if exists (Select * from sys.xml_schema_collections where name like 'WordList')   drop XML SCHEMA COLLECTION WordList go create xml schema collection WordList as ' <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="words">        <xs:simpleType>               <xs:list itemType="xs:string" />        </xs:simpleType> </xs:element> </xs:schema>'   go   DECLARE @string VARCHAR(MAX) –we'll get some sample data from the great Ogden Nash Select @String='This is a song to celebrate banks, Because they are full of money and you go into them and all you hear is clinks and clanks, Or maybe a sound like the wind in the trees on the hills, Which is the rustling of the thousand dollar bills. Most bankers dwell in marble halls, Which they get to dwell in because they encourage deposits and discourage withdrawals, And particularly because they all observe one rule which woe betides the banker who fails to heed it, Which is you must never lend any money to anybody unless they don''t need it. I know you, you cautious conservative banks! If people are worried about their rent it is your duty to deny them the loan of one nickel, yes, even one copper engraving of the martyred son of the late Nancy Hanks; Yes, if they request fifty dollars to pay for a baby you must look at them like Tarzan looking at an uppity ape in the jungle, And tell them what do they think a bank is, anyhow, they had better go get the money from their wife''s aunt or ungle. 
But suppose people come in and they have a million and they want another million to pile on top of it, Why, you brim with the milk of human kindness and you urge them to accept every drop of it, And you lend them the million so then they have two million and this gives them the idea that they would be better off with four, So they already have two million as security so you have no hesitation in lending them two more, And all the vice-presidents nod their heads in rhythm, And the only question asked is do the borrowers want the money sent or do they want to take it withm. Because I think they deserve our appreciation and thanks, the jackasses who go around saying that health and happi- ness are everything and money isn''t essential, Because as soon as they have to borrow some unimportant money to maintain their health and happiness they starve to death so they can''t go around any more sneering at good old money, which is nothing short of providential. '   –we now turn it into XML declare @xml_data xml(WordList)  set @xml_data='<words>'+ replace(@string,'&', '&amp;')+'</words>'    select T.ref.value('.', 'nvarchar(100)')  from (Select @xml_data.query('                      for $i in data(/words) return                      element li { $i }               '))  A(list) cross apply A.List.nodes('/li') T(ref)     …which gives (truncated, of course)…
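Since the routine's natural home is a function or procedure, one possible wrapping is sketched below. This is only an illustration: the function name is mine, it assumes the WordList schema collection created above already exists, and it keeps the same simple '&' escaping as the original.
CREATE FUNCTION dbo.SplitIntoWords (@string VARCHAR(MAX))
RETURNS @words TABLE (word NVARCHAR(100))
AS
BEGIN
  -- same technique as above: load the text into a typed XML list and shred it with nodes()
  DECLARE @xml_data xml(WordList)
  SET @xml_data = '<words>' + REPLACE(@string, '&', '&amp;') + '</words>'
  INSERT @words (word)
  SELECT T.ref.value('.', 'nvarchar(100)')
  FROM (SELECT @xml_data.query('
                 for $i in data(/words) return
                 element li { $i }
          ')) A(list)
  CROSS APPLY A.list.nodes('/li') T(ref)
  RETURN
END
A call such as SELECT word FROM dbo.SplitIntoWords('This is a song to celebrate banks') then returns one row per word, with punctuation still attached, exactly as in the unwrapped version.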

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation LINQ (language integrated query) is a component of the Microsoft. NET Framework since version 3.5. It allows a SQL-like query to various data sources such as SQL, XML etc. Like SQL also LINQ to SQL provides a declarative notation of problem solving – i.e. you don’t need describe in detail how a task could be solved, you describe what to be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, would be to access features with this way. Then this construct is conceivable: var largeFeatures = from feature in features where (feature.GetValue("SHAPE_Area").ToDouble() > 3000) select feature; or its equivalent as a lambda expression: var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000)); This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight - you have to deliver only the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background, whose execution is delayed (deferred execution) - when you are really request entities (foreach, Count (), ToList (), ..) an instantiation processing takes place, although it was already created at a completely different place. Especially in multiple iteration through entities in the first debuggings you are rubbing your eyes when the execution pointer jumps magically back in the iterator logic. Realization A very concise logic for constructing IEnumerable<IFeature> can be achieved by running through a IFeatureCursor. You return each feature via yield. For an easier usage I have put the logic in an extension method Getfeatures() for IFeatureClass: public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy) { IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy); IFeature feature; while (null != (feature = featureCursor.NextFeature())) { yield return feature; } //this is skipped in unit tests with cursor-mock if (Marshal.IsComObject(featureCursor)) { Marshal.ReleaseComObject(featureCursor); } } So you can now easily generate the IEnumerable<IFeature>: IEnumerable<IFeature> features = _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle); You have to be careful with the recycling cursor. After a delayed execution in the same context it is not a good idea to re-iterated on the features. In this case only the content of the last (recycled) features is provided and all the features are the same in the second set. Therefore, this expression would be critical: largeFeatures.ToList(). ForEach(feature => Debug.WriteLine(feature.OID)); because ToList() iterates once through the list and so the the cursor was once moved through the features. So the extension method ForEach() always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated executions of ForEach() is not a problem, because for every time the state machine is re-instantiated and thus the cursor runs again - that's the magic already mentioned above. Perspective Now you can also go one step further and realize your own implementation for the interface IEnumerable<IFeature>. This requires that only the method and property to access the enumerator have to be programmed. In the enumerator himself in the Reset() method you organize the re-executing of the search. 
This could be achieved with an appropriate delegate in the constructor: new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor)); which is called in Reset(): public void Reset() { _featureCursor = _resetCursor(_t); } In this manner, enumerators for completely different scenarios can be implemented and used on the client side in exactly the same way as described above. Thus cursors, selection sets, etc. merge into a single concept and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily - a major step towards better software quality. Conclusion Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created. The processing costs additional time - about 20 to 100 percent. In addition, working without a recycling cursor quickly becomes a performance gap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building up a list of results. In my experience the code size is reduced by an average of 75 to 90 percent! So I am happy to wait a few milliseconds longer. As so often, maintainability has to be balanced against performance - and for me maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is in any case dominated not by code execution but by waiting for user input. Demo source code The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.
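To round the example off, here is roughly how the extension method and the LINQ query from the introduction combine on the calling side. A hedged sketch: the GetValue()/ToDouble() helpers are the ones assumed in the introductory example, null stands for "no query filter", and a non-recycling cursor is used because references to the features are kept:
// Hedged sketch: declarative filtering over a feature class without an explicit cursor loop.
IEnumerable<IFeature> features =
    featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle);
var largeFeatures = features.Where(feature =>
    feature.GetValue("SHAPE_Area").ToDouble() > 3000);
// Deferred execution: the underlying IFeatureCursor only runs once the result is enumerated.
foreach (IFeature feature in largeFeatures)
{
    Debug.WriteLine(feature.OID);
}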

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is by saying that if you would put each instance of a kind of data on its own web page, if it would make sense to add comments to it, or tags, or ratings, then it is content and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't and you can store it using somewhat simpler means that I will now describe. In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties: public class SettingsRecord { public virtual int Id { get; set;} public virtual string RuleType { get; set; } public virtual string Name { get; set; } public virtual string Criterion { get; set; } public virtual string Theme { get; set; } public virtual int Priority { get; set; } public virtual string Zone { get; set; } public virtual string Position { get; set; } } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Each property has to be virtual for nHibernate to handle it (it creates derived classed that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described from a migration: public int Create() { SchemaBuilder.CreateTable("SettingsRecord", table => table .Column<int>("Id", column => column.PrimaryKey().Identity()) .Column<string>("RuleType", column => column.NotNull().WithDefault("")) .Column<string>("Name", column => column.NotNull().WithDefault("")) .Column<string>("Criterion", column => column.NotNull().WithDefault("")) .Column<string>("Theme", column => column.NotNull().WithDefault("")) .Column<int>("Priority", column => column.NotNull().WithDefault(10)) .Column<string>("Zone", column => column.NotNull().WithDefault("")) .Column<string>("Position", column => column.NotNull().WithDefault("")) ); return 1; } When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class: private readonly IRepository<SettingsRecord> _repository; private readonly ISignals _signals; private readonly ICacheManager _cacheManager; public SettingsService( IRepository<SettingsRecord> repository, ISignals signals, ICacheManager cacheManager) { _repository = repository; _signals = signals; _cacheManager = cacheManager; } The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kind of Linq queries) as well as methods such as Delete and Create. 
Here's for example how I'm getting all the records in the table: _repository.Table.ToList() And here's how I'm deleting a record: _repository.Delete(_repository.Get(r => r.Id == id)); And here's how I'm creating one: _repository.Create(new SettingsRecord { Name = name, RuleType = ruleType, Criterion = criterion, Theme = theme, Priority = priority, Zone = zone, Position = position }); In summary, you create a record class, a migration, and you're in business and can just manipulate the data through the repository that the framework is exposing. You even get ambient transactions from the work context.
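Because Table is an IQueryable<SettingsRecord>, ordinary LINQ works for reading as well. A hedged sketch of a typical read helper (the filter and ordering are purely illustrative; only the repository members already shown are assumed):
// Hedged sketch: query the records through the repository's IQueryable Table property.
public IEnumerable<SettingsRecord> GetRulesForZone(string zone)
{
    return _repository.Table
        .Where(r => r.Zone == zone)
        .OrderBy(r => r.Priority)
        .ToList();
}
The expression is executed by the underlying data layer, so only the matching rows need to come back to the application.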

    Read the article

  • Migrating R Scripts from Development to Production

    - by Mark Hornick
“How do I move my R scripts stored in one database instance to another? I have my development/test system and want to migrate to production.” Users of Oracle R Enterprise Embedded R Execution will often store their R scripts in the R Script Repository in Oracle Database, especially when using the ORE SQL API. From previous blog posts, you may recall that Embedded R Execution enables running R scripts managed by Oracle Database using both R and SQL interfaces. In ORE 1.3.1, the SQL API requires scripts to be stored in the database and referenced by name in SQL queries. The SQL API enables seamless integration with database-based applications and ease of production deployment.
Loading R scripts in the repository
Before talking about migration, we'll first introduce how users store R scripts in Oracle Database. Users can add R scripts to the repository in R using the function ore.scriptCreate, or in SQL using the function sys.rqScriptCreate. For the sample R script
    id <- 1:10
    plot(1:100,rnorm(100),pch=21,bg="red",cex =2)
    data.frame(id=id, val=id / 100)
users wrap this in a function and store it in the R Script Repository with a name. In R, this looks like
ore.scriptCreate("RandomRedDots", function () {
    id <- 1:10
    plot(1:100,rnorm(100),pch=21,bg="red",cex =2)
    data.frame(id=id, val=id / 100)
})
In SQL, this looks like
begin
  sys.rqScriptCreate('RandomRedDots',
    'function(){
       id <- 1:10
       plot(1:100,rnorm(100),pch=21,bg="red",cex =2)
       data.frame(id=id, val=id / 100)
     }');
end;
/
The R function ore.scriptDrop and SQL function sys.rqScriptDrop can be used to drop these scripts as well. Note that the system will give an error if the script name already exists.
Accessing R scripts once they've been loaded
If you're not using a source code control system, it is possible that your R scripts can be misplaced or files modified, making what is stored in Oracle Database the only or best copy of your R code. If you've loaded your R scripts to the database, it is straightforward to access these scripts from the database table SYS.RQ_SCRIPTS. For example: select * from sys.rq_scripts where name='myScriptName'; From R, scripts in the repository can be loaded into the R client engine using a function similar to the following:
ore.scriptLoad <- function(name) {
  query <- paste("select script from sys.rq_scripts where name='",name,"'",sep="")
  str.f <- OREbase:::.ore.dbGetQuery(query)
  assign(name,eval(parse(text = str.f)),pos=1)
}
ore.scriptLoad("myFunctionName")
This function is also useful if you want to load an existing R script from the repository into another R script in the repository – think modular coding style. Just include this function in the body of the other function and load the named script.
Migrating R scripts from one database instance to another
To move a set of functions from one system to another, the following script loads the functions from one R script repository into the client R engine, then connects to the target database and creates the scripts there with the same names.
scriptNames <- OREbase:::.ore.dbGetQuery("select name from sys.rq_scripts where name not like 'RQG$%' and name not like 'RQ$%'")$NAME
for(s in scriptNames) {
  cat(s,"\n")
  ore.scriptLoad(s)
}
ore.disconnect()
ore.connect("rquser","orcl","localhost","rquser")
for(s in scriptNames) {
  cat(s,"\n")
  ore.scriptDrop(s)
  ore.scriptCreate(s,get(s))
}
Best Practice
When naming R scripts, keep in mind that the name can be up to 128 characters. As such, consider organizing scripts in a directory-like structure. For example, if an organization has multiple groups or applications sharing the same database and there are multiple components, use "/" to facilitate the organization of functions:
ore.scriptCreate("/org1/app1/component1/myFunction1", myFunction1)
ore.scriptCreate("/org1/app1/component1/myFunction2", myFunction2)
ore.scriptCreate("/org1/app2/component2/myFunction2", myFunction2)
ore.scriptCreate("/org2/app2/component1/myFunction3", myFunction3)
ore.scriptCreate("/org3/app2/component1/myFunction4", myFunction4)
Users can then query for all functions using the path prefix when looking up functions.
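With that convention in place, finding every script that belongs to one application is a simple LIKE query against the same SYS.RQ_SCRIPTS table used earlier. A hedged sketch that reuses the internal helper from the ore.scriptLoad example above (the function name ore.scriptList is mine, not part of the ORE API):
# Hedged sketch: list all R scripts stored under a given "directory" prefix.
ore.scriptList <- function(prefix) {
  query <- paste("select name from sys.rq_scripts where name like '", prefix, "%'", sep="")
  OREbase:::.ore.dbGetQuery(query)$NAME
}
ore.scriptList("/org1/app1/")   # would return the component1 functions registered above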

    Read the article

  • SCOM 2012 DNS Forwarder Availability Monitor

    - by Massimo
    Background: I have an environment with two different AD domains, each in its own forest, each with two Windows Server 2008 R2 domain controllers acting as DNS servers. There is no trust between the domains. Each DNS server manages the main DNS zone for its AD domain, and then some other zones, including the reverse lookup zone for its IP subnets; all zones are AD-integrated; all DNS servers which manages a zone are correctly listed as authoritative name servers for that zone. So, the situation is like this (using fake names and IP addresses): Domain A: DNS domain: a.dom IP subnet: 192.168.1.X DC/DNS Servers: serverA1.a.dom (192.168.1.1) and serverA2.a.dom (192.168.1.2) Authoritative zones: a.dom, 1.168.192.in-addr.arpa, somezone.local Domain B: DNS domain: b.dom IP subnet: 10.0.0.X DC/DNS Servers: serverB1.b.dom (10.0.0.1) and serverB2.b.dom (10.0.0.2) Authoritative zones: b.dom, 0.0.10.in-addr.arpa, someotherzone.local DNS servers in domain A have conditional forwarders defined for each zone managed by DNS servers in domain B, forwarding to both domain B's DNS servers; DNS servers in domain B have the opposite configuration. All forwarders are stored in Active Directory. All is working perfectly, and computers in each domain can resolve forward and reverse DNS queries for both domains, using their domain's DNS servers. The problem: I have SCOM 2012 deployed in domain A, with the SCOM agent installed on both DCs; the management packs for Active Directory and DNS Server are installed and up-to-date. I have a series of alerts like the following ones on both domain controllers; each alert is generated for each forwarded zone and for each forwarded server: Forwarder someotherzone.local (10.0.0.1) cannot resolve the host name 192.168.1.1,someotherzone.local for serverA1.a.dom Forwarder someotherzone.local (10.0.0.2) cannot resolve the host name 192.168.1.1,someotherzone.local for serverA1.a.dom Forwarder someotherzone.local (10.0.0.1) cannot resolve the host name 192.168.1.2,someotherzone.local for serverA2.a.dom Forwarder someotherzone.local (10.0.0.2) cannot resolve the host name 192.168.1.2,someotherzone.local for serverA2.a.dom Forwarder 0.0.10.in-addr.arpa (10.0.0.1) cannot resolve the host name 192.168.1.1,0.0.10.in-addr.arpa for serverA1.a.dom Forwarder 0.0.10.in-addr.arpa (10.0.0.2) cannot resolve the host name 192.168.1.1,0.0.10.in-addr.arpa for serverA1.a.dom Forwarder 0.0.10.in-addr.arpa (10.0.0.1) cannot resolve the host name 192.168.1.2,0.0.10.in-addr.arpa for serverA2.a.dom Forwarder 0.0.10.in-addr.arpa (10.0.0.2) cannot resolve the host name 192.168.1.2,0.0.10.in-addr.arpa for serverA2.a.dom The only exception is the main AD DNS zone managed by domain B's DNS servers (b.dom): for that conditional forwarder, no alert is generated and the forwarder availability monitor is green. Ok, what does this mean? What are those monitors trying to tell me? What are they checking? What's actually wrong? And why there is no error for the "b.dom" zone, which is configured in the exact same way as the other ones, both as a zone in domain B's DNS servers and as a forwarder in domain A's DNS servers?

    Read the article

  • Why does Cacti keep waiting for dead poller processes?

    - by Oliver Salzburg
    sorry for the length I am currently setting up a new Debian (6.0.5) server. I put Cacti (0.8.7g) on it yesterday and have been battling with it ever since. Initial issue The initial issue I was observing, was that my graphs weren't updating. So I checked my cacti.log and found this concerning message: POLLER: Poller[0] Maximum runtime of 298 seconds exceeded. Exiting. That can't be good, right? So I went checking and started poller.php myself (via sudo -u www-data php poller.php --force). It will pump out a lot of message (which all look like what I would expect) and then hang for a minute. After that 1 minute, it will loop the following message: Waiting on 1 of 1 pollers. This goes on for 4 more minutes until the process is forcefully ended for running longer than 300s. So far so good I went on for a good hour trying to determine what poller might still be running, until I got to the conclusion that there simply is no running poller. Debugging I checked poller.php to see how that warning is issued and why. On line 368, Cacti will retrieve the number of finished processes from the database and use that value to calculate how many processes are still running. So, let's see that value! I added the following debug code into poller.php: print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n"; Result This will print the following within seconds of starting poller.php: Finished: 0 - Started: 1 Waiting on 1 of 1 pollers. Finished: 1 - Started: 1 So the values are being read and are valid. Until we get to the part where it keeps looping: Finished: - Started: 1 Waiting on 1 of 1 pollers. Suddenly, the value is gone. Why? Putting var_dump() in there confirms the issue: NULL Finished: - Started: 1 Waiting on 1 of 1 pollers. The return value is NULL. How can that be when querying SELECT COUNT()...? (SELECT COUNT() should always return one result row, shouldn't it?) More debugging So I went into lib\database.php and had a look at that db_fetch_cell(). A bit of testing confirmed, that the result set is actually empty. So I added my own database query code in there to see what that would do: $finished_processes = db_fetch_cell("SELECT count(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"); print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n"; $mysqli = new mysqli("localhost","cacti","cacti","cacti"); $result = $mysqli->query("SELECT COUNT(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00';"); $row = $result->fetch_assoc(); var_dump( $row ); This will output Finished: - Started: 1 array(1) { ["COUNT(*)"]=> string(1) "2" } Waiting on 1 of 1 pollers. So, the data is there and can be accessed without any problems, just not with the method Cacti is using? Double-check that! I enabled MySQL logging to make sure I'm not imagining things. Sure enough, when the error message is looped, the cacti.log reads as if it was querying like mad: 06/29/2012 08:44:00 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'" 06/29/2012 08:44:01 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'" 06/29/2012 08:44:02 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'" But none of these queries are logged my MySQL. 
Yet, when I add my own database query code, it shows up just fine. What the heck is going on here?

    Read the article

  • BIND DNS Master with Zerigo Slaves - BIND won't update the slave servers

    - by Anthony
    I've tried to resolve this myself and have looked through Google and Stack but haven't found the answer I'm looking for. Currently on a VPS server I have BIND DNS installed as a MASTER DNS Server. I use Zerigo's DNS service as SLAVE servers for public use: The Master doesn't receive queries - It's job is to simply create and modify DNS entries locally of which the SLAVE use to serve. Here is an excerpt of the BIND log, I set it to INFO event logging: 14-Apr-2012 23:00:00.234 general: info: received control channel command 'reload' 14-Apr-2012 23:00:00.234 general: info: loading configuration from 'C:\DNS\BIND\etc\named.conf' 14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv4 port range: [1024, 65535] 14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv6 port range: [1024, 65535] 14-Apr-2012 23:00:00.250 general: info: reloading configuration succeeded 14-Apr-2012 23:00:00.250 general: info: reloading zones succeeded 14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR started 14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR ended 14-Apr-2012 23:16:23.015 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR started 14-Apr-2012 23:16:23.031 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR ended As you can see there is no problem with Zerigo's DNS servers requesting new DNS data, when I force a reload that is; I don't believe, as per the way they are set as SLAVE, that they poll for changes. However the problem is the other way; the MASTER is not updating the SLAVE servers when reload is run (on the MASTER); it is a batch on a 15 minute timer. Below is my NAMED.CONF: key "rndc-key" { algorithm hmac-md5; secret "REMOVED FOR SECURITY"; }; acl "trusted" { 174.36.24.251/32; 68.71.141.22/32; localhost; }; options { version "not currently available"; directory "C:\DNS\BIND\etc"; allow-query { trusted; }; }; controls { inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; }; }; logging{ channel simple_log { file "C:\DNS\BIND\logging\bind.log" versions 3 size 5m; severity info; print-time yes; print-severity yes; print-category yes; }; category default{ simple_log; }; }; zone "ajmakeup.com" in { type master; file "c:\dns\BIND\zones\db.ajmakeup.com.txt"; allow-transfer { 174.36.24.251; 68.71.141.22; }; allow-update { none; }; }; Does my problem have something to do with 'allow-query' under options? You will notice that 'allow-transfer' is set explicitly on each DNS zone. In case you need it here is my RNDC.CONF: key "rndc-key" { algorithm hmac-md5; secret "REMOVED FOR SECURITY"; }; options { default-key "rndc-key"; default-server 127.0.0.1; default-port 953; }; server localhost { key "rndc-key"; }; Note: I am using WebsitePanel as my hosting panel and is such why it creates the zone enteries the way it does. Although I know I can change this behaviour, I do not wish to do so nor do I believe is the root of the problem. Thanks for your help.

    Read the article

  • Nginx + PHP-FPM executes script, but returns 404

    - by MorfiusX
    I am using Nginx + PHP-FPM to run a Wordpress based site. I have a URL that should return dynamically generated JSON data for use with the DataTables jQuery plugin. The data is returned properly, but with a return code of 404. I think this is a Nginx config issue, but I haven't been able to figure out why. The script 'getTable.php' works properly on the production version of the site which is currently using Apache. Anyone know how I can get this to work on Nginx? URL: http://dev.iloveskydiving.org/wp-content/plugins/ils-workflow/lib/getTable.php SERVER: CentOS 6 + Varnish (caching disabled for development) + Nginx + PHP-FPM + Wordpress + W3 Total Cache Nginx Config: server { # Server Parameters listen 127.0.0.1:8082; server_name dev.iloveskydiving.org; root /var/www/dev.iloveskydiving.org/html; access_log /var/www/dev.iloveskydiving.org/logs/access.log main; error_log /var/www/dev.iloveskydiving.org/logs/error.log error; index index.php; # Rewrite minified CSS and JS files location ~* \.(css|js) { if (!-f $request_filename) { rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; expires max; } } # Set a variable to work around the lack of nested conditionals set $cache_uri $request_uri; # Don't cache uris containing the following segments if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php|wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") { set $cache_uri "no cache"; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp\-postpass|wordpress_logged_in") { set $cache_uri 'no cache'; } # Use cached or actual file if they exists, otherwise pass request to WordPress location / { try_files /wp-content/w3tc/pgcache/$cache_uri/_index.html $uri $uri/ /index.php?q=$uri&$args; } # Cache static files for as long as possible location ~* \.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { try_files $uri =404; expires max; access_log off; } # Deny access to hidden files location ~* /\.ht { deny all; access_log off; log_not_found off; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_intercept_errors on; fastcgi_pass unix:/var/lib/php-fpm/php-fpm.sock; # port where FastCGI processes were spawned } } Fast CGI Params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; UPDATE: Upon further digging, it looks like Nginx is generating the 404 and 
PHP-FPM is executing the script properly and returning a 200. UPDATE: Here are the contents of the script: <?php /** * Connect to Wordpres */ require(dirname(__FILE__) . '/../../../../wp-blog-header.php'); /** * Define temporary array */ $aaData = array(); $aaData['aaData'] = array(); /** * Execute Query */ $query = new WP_Query( array( 'post_type' => 'post', 'posts_per_page' => '-1' ) ); foreach ($query->posts as $post) { array_push( $aaData['aaData'], array( $post->post_title ) ); } /** * Echo JSON encoded array */ echo json_encode($aaData);

    Read the article

  • Unable to Manage DNS via MMC

    - by IT Helpdesk Team Manager
    When trying to access the DNS service on Microsoft Windows Server 2003 (Build 3790) domain controller/schema master via the MMC DNS snap in or locally via the DNS MMC from Administrative tools I'm getting a red "X" through the icon for the DNS Server. The inability to access DNS management via MMC happens on all domain controllers as well. We've looked at items such as the DHCP client not being started, incorrect DNS setup ( the machine points at itself and another DC ), the DNS service not running ( it is and all DNS queries via NSLOOKUP work correctly ), dslint returns the correct information and functions as expected. There is the following entry in the DNS event log: The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. 0000: 0000051b dnscmd fails with RPC server unavailable yet RPC is started: C:\Documents and Settings\Administrator.DOMAIN>dnscmd /Info Info query failed status = 1722 (0x000006ba) Command failed: RPC_S_SERVER_UNAVAILABLE 1722 (000006ba) DCDIAG /TEST:DNS /V /E produces the following errors: Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running) [Error details: 1753 (Type: Win32 - Description: There are no more endpoints available from the endpoint mapper.)] Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running) [Error details: 1722 (Type: Win32 - Description: The RPC server is unavailable.)] The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code. A DNS query for _ldap._tcp.dc._msdcs. returns the correct results. All domain and ADS related activities are working except that I can't manage my DNS via MMC or dnscmd. Any thoughts or solutions would be greatly appreciated. EDIT: Adding Registry export per request: Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc Class Name: <NO CLASS> Last Write Time: 10/18/2012 - 2:29 PM Value 0 Name: DCOM Protocols Type: REG_MULTI_SZ Data: ncacn_ip_tcp Value 1 Name: UuidSequenceNumber Type: REG_DWORD Data: 0xb19bd0f Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\ClientProtocols Class Name: <NO CLASS> Last Write Time: 3/9/2007 - 12:11 PM Value 0 Name: ncacn_np Type: REG_SZ Data: rpcrt4.dll Value 1 Name: ncacn_ip_tcp Type: REG_SZ Data: rpcrt4.dll Value 2 Name: ncadg_ip_udp Type: REG_SZ Data: rpcrt4.dll Value 3 Name: ncacn_http Type: REG_SZ Data: rpcrt4.dll Value 4 Name: ncacn_at_dsp Type: REG_SZ Data: rpcrt4.dll Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NameService Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Value 0 Name: DefaultSyntax Type: REG_SZ Data: 3 Value 1 Name: Endpoint Type: REG_SZ Data: \pipe\locator Value 2 Name: NetworkAddress Type: REG_SZ Data: \\. Value 3 Name: Protocol Type: REG_SZ Data: ncacn_np Value 4 Name: ServerNetworkAddress Type: REG_SZ Data: \\. 
Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NetBios Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\RpcProxy Class Name: <NO CLASS> Last Write Time: 3/9/2007 - 12:11 PM Value 0 Name: Enabled Type: REG_DWORD Data: 0x1 Value 1 Name: ValidPorts Type: REG_SZ Data: pdc:100-5000 Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\SecurityService Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Value 0 Name: 9 Type: REG_SZ Data: secur32.dll Value 1 Name: 10 Type: REG_SZ Data: secur32.dll Value 2 Name: 14 Type: REG_SZ Data: schannel.dll Value 3 Name: 16 Type: REG_SZ Data: secur32.dll Value 4 Name: 1 Type: REG_SZ Data: secur32.dll Value 5 Name: 18 Type: REG_SZ Data: secur32.dll Value 6 Name: 68 Type: REG_SZ Data: netlogon.dll

    Read the article

  • Troubleshooting unwanted NTP Traffic

    - by Jaxaeon
    A domain controller running Windows Server 2012 is sending NTP and NETBIOS traffic to an address that has never been configured as a time provider. The server logs give no indication that any NTP traffic is failing. The only place I see any evidence of this traffic is in pfSense system logs: (Blocked) Jun 9 08:48:50 DOMAIN 10.0.1.100:123 192.128.127.254:123 UDP (Blocked) Jun 9 08:48:53 DOMAIN 10.0.1.100:137 192.128.127.254:137 UDP As far as I can tell the NTP service is working normally otherwise: DC2.domain.com[10.0.1.101:123]: ICMP: 0ms delay NTP: -0.0131705s offset from DC1.domain.com RefID: DC1.domain.com [10.0.1.100] Stratum: 3 DC1.domain.com *** PDC ***[10.0.1.100:123]: ICMP: 0ms delay NTP: +0.0000000s offset from DC1.domain.com RefID: clock1.albyny.inoc.net [64.246.132.14] Stratum: 2 The time provider NtpClient is currently receiving valid time data from 1.pool.ntp.org,0×1 (ntp.m|0x0|0.0.0.0:123->204.2.134.163:123). The time provider NtpClient is currently receiving valid time data from 0.pool.ntp.org,0×1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123). The time service is now synchronizing the system time with the time source 0.pool.ntp.org,0×1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123). I've been inside and out of the NTP configuration and cannot find any reason for this traffic. Reverse DNS points the destination address to nothing.attdns.com. pinging nothing.attdns.com from the domain controller in question leads to a response from loopback (127.0.0.2) which makes my head hurt. Any ideas? EDIT1: It should probably be noted that after a dns flush, nslookup 192.128.127.254 returns nothing.attdns.com. 192.128.127.254 is not present in domain.com DNS records. The attdns.com domain is not present in cached lookups. 127.in-addr.arpa is clean of any funkyness. EDIT2: The loopback ping response from nothing.attdns.com is possibly unrelated. Machines on other networks are also displaying this behavior. EDIT3: As mentioned in the comments, I tracked the problem network adapter back to my pfSense VM hosted in esxi 5.5 (I know shame on me for virtualizing a firewall). pfSense was configured to use DC1.domain.com as its primary time provider, but upon changing it back to pool.ntp.org the problem persists. pfSense logs give no indication of NTP misconfiguration. Everywhere I can think to look this VM is identified as 10.0.1.253, so I still have no idea why it’s sending NTP requests as 192.128… Since this firewall was a temporary solution to a problem that no longer exists so I am going to decommission it. EDIT4: The queries were coming from another machine sharing the same virtual adapter as the firewall. The machine has two local adapters: one for LAN, and the other for attached hardware that uses an Ethernet connection. That hardware sits in the the mystery subnet, and the machine is broadcasting NTP requests over both adapters.

    Read the article

  • Random Slow Response

    - by ARehman
We have an ASP.NET MVC 1.0 application running on Windows Server 2008 – Standard (32-bit), Dual Core Xeon (3.0 GHz), 2 GB RAM. Most of the time the application renders a response in 3-4 seconds, but sometimes users get a very late response and the delay is up to 40 seconds or more than a minute. It happens in the following way: a user browses a page, stays idle for 5, 10 or 15 minutes, then tries to browse the same page or some other. Now there is a chance that he will see a late response even though the app pool is still up and running. This can happen with any arbitrary page. We have tried the following / made these observations: Moved the application to a stand-alone web server. App Pool idle shutdown time is 60 minutes. There are no abrupt shutdowns/restarts. CPU or memory doesn't spike. No delays in SQL queries. Modified the App Pool setting to run in classic mode. It didn't help. Plugged in a custom module to log all requests which took more than 5 seconds to complete. It didn't pick up any request of interest. Enabled 'Failed Request Tracing' to log all requests which take 20 or more seconds to complete. It didn't log anything. Event Viewer, HTTPERR log, W3SVC logs or WAS logs don't indicate anything. HTTPERR only has '_ _ Timer_ConnectionIdle _ _' entries. There is not much traffic to the server. This can happen even if only two users are active. Next we captured TCP/IP traffic on both the user and server end with Wireshark, and below are brief details of this slowness: Browser sends a request for ~/User/Home/ (GET request) by setting up a receiving end point using port 'wlbs (port 2504)'. I'm not sure if this could be a problem in some way: the browser didn't hand-shake with the server first and assumed that the last connection was still open, whereas I had browsed the same page 4 minutes ago and hadn't performed any activity with the site after that. If I look at the HTTPERR log, it has a '_ _ Timer_ConnectionIdle _ _ _' entry for my last activity with the server. Browser (I was using Chrome) waits for any response from the server, doesn't find any, then starts retransmitting the same request using the same end point after increasing wait intervals, e.g. after 8, 18, 29, 40, 62, and 92 seconds. All these GET requests were received by the server as well. But the server didn't send any packet to the client. The browser didn't see any response on the end point it set up in point 1, so it opened a new end point 'optiwave-lm (port 2524)', did a handshake with the server and transmitted the same request again. The server received it, processed it, and returned a successful response. What happened to the earlier 6-7 requests? Were they passed on to HTTP.SYS or not? Why Failed Request Tracing did not log anything, we haven't found any clue yet. The server served the same page successfully just 4 minutes ago. Looking forward to more suggestions/solutions. -- Thanks

    Read the article

  • squid3 auth thru samba using ntlm to AD doesn't work

    - by derty
    some users here are spending to much time exploring the WWW. So big boss whats to get this under control. We use a squid3 just for some security reason and chace benefits. and now i'm trying to set up a new proxy on a different server (Debian 6) Permissions are defined in AC and the squid3 should get the auth thru samba/winbind by using the ntlm protocol. but i'll get all the time Access, denited. it only works by using LDAP but thats not the way i need it. here some log and confs squid access.log 1326878095.784 1 192.168.15.27 TCP_DENIED/407 4049 GET http://at.msn.com/? -NONE/- text/html 1326878095.791 1 192.168.15.27 TCP_DENIED/407 4294 GET http://at.msn.com/? - NONE/- text/html 1326878095.803 9 192.168.15.27 TCP_DENIED/403 4028 GET http://at.msn.com/? kavan NONE/- text/html 1326878095.848 0 192.168.15.27 TCP_DENIED/403 3881 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html 1326878100.279 0 192.168.15.27 TCP_DENIED/403 3735 GET http://www.google.at/ kavan NONE/- text/html 1326878100.296 0 192.168.15.27 TCP_DENIED/403 3870 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html 1326878155.700 0 192.168.15.27 TCP_DENIED/407 4072 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html 1326878155.705 2 192.168.15.27 TCP_DENIED/407 4317 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html 1326878155.709 3 192.168.15.27 TCP_DENIED/403 4026 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml kavan NONE/- text/html squid chace 2012/01/18 10:12:49| Creating Swap Directories 2012/01/18 10:12:49| Starting Squid Cache version 3.1.6 for x86_64-pc-linux-gnu... 2012/01/18 10:12:49| Process ID 17236 2012/01/18 10:12:49| With 65535 file descriptors available 2012/01/18 10:12:49| Initializing IP Cache... 
    2012/01/18 10:12:49| DNS Socket created at [::], FD 7
    2012/01/18 10:12:49| DNS Socket created at 0.0.0.0, FD 8
    2012/01/18 10:12:49| Adding nameserver 192.168.15.2 from /etc/resolv.conf
    2012/01/18 10:12:49| Adding nameserver 192.168.15.19 from /etc/resolv.conf
    2012/01/18 10:12:49| Adding nameserver 192.168.15.1 from /etc/resolv.conf
    2012/01/18 10:12:49| Adding domain schoenbrunn.local from /etc/resolv.conf
    2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_auth' processes
    2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'ntlm_auth' processes
    2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'squid_kerb_auth' processes
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_group' processes
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
    2012/01/18 10:12:49| Unlinkd pipe opened on FD 73
    2012/01/18 10:12:49| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
    2012/01/18 10:12:49| Store logging disabled
    2012/01/18 10:12:49| Swap maxSize 0 + 262144 KB, estimated 20164 objects
    2012/01/18 10:12:49| Target number of buckets: 1008
    2012/01/18 10:12:49| Using 8192 Store buckets
    2012/01/18 10:12:49| Max Mem size: 262144 KB
    2012/01/18 10:12:49| Max Swap size: 0 KB
    2012/01/18 10:12:49| Using Least Load store dir selection
    2012/01/18 10:12:49| Set Current Directory to /var/spool/squid3
    2012/01/18 10:12:49| Loaded Icons.
    2012/01/18 10:12:49| Accepting HTTP connections at [::]:3128, FD 74.
    2012/01/18 10:12:49| HTCP Disabled.
    2012/01/18 10:12:49| Squid modules loaded: 0
    2012/01/18 10:12:49| Adaptation support is off.
    2012/01/18 10:12:49| Ready to serve requests.
    2012/01/18 10:12:50| storeLateRelease: released 0 objects

    smb.conf:
    # Domain Authentication Settings
    workgroup = <WORKGROUP>
    security = ads
    password server = <DOMAINNAME>.LOCAL
    realm = <DOMAINNAME>.LOCAL
    ldap ssl = no
    # logging
    log level = 5
    max log size = 50
    # logs split per machine
    log file = /var/log/samba/%m.log
    # max 50KB per log file, then rotate
    ; max log size = 50
    # User settings
    username map = /etc/samba/smbusers
    idmap uid = 10000-20000000
    idmap gid = 10000-20000000
    idmap backend = ad
    ; template primary group = <ad group>
    template shell = /sbin/nologin
    # Winbind Settings
    winbind separator = +
    winbind enum users = Yes
    winbind enum groups = Yes
    winbind netsted groups = Yes
    winbind nested groups = Yes
    winbind cache time = 10
    winbind use default domain = Yes
    # Other Globals
    unix charset = LOCALE
    server string = <SERVERNAME>
    load printers = no
    printing = cups
    cups options = raw
    ; printcap name = /etc/printcap
    # obtain list of printers automatically on SystemV
    ; printcap name = lpstat
    ; printing = cups

    squid.conf:
    auth_param ntlm program /usr/bin/ntlm_auth --require-membership-of=<DOMAINNAME>\\INTERNETZ --helper-protocol=squid-2.5-ntlmssp
    auth_param ntlm children 10
    auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b "dc=<dcname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f sAMAccountName=%s -h 192.168.15.19:3268
    auth_param basic realm "Proxy Authentifizierung. Bitte geben Sie Ihren Benutzername und Ihr Passwort ein!"
    # (the realm text above just asks the user, in German, for username and password)
    external_acl_type InetGroup %LOGIN /usr/lib/squid3/squid_ldap_group -R -b "dc=<domainname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f "(&(objectclass=person)(sAMAccountName=%v) (memberof=cn=%a,cn=internetz,dc=<domainname>,dc=local))" -h 192.168.15.19:3268
    auth_param negotiate program /usr/lib/squid3/squid_kerb_auth -d
    auth_param negotiate children 10
    auth_param negotiate keep_alive on
    acl localnet proxy_auth REQUIRED
    acl InetAccess external InetGroup Internetz
    http_access allow InetAccess
    http_access deny all
    acl auth proxy_auth REQUIRED
    http_access allow auth

    Something very suspicious: after adding the proxy server to the domain I see two new entries for the PC, one with the original computer name leopoldine and one with leopoldine CNF:f8efa4c4-ff0e-4217-939d-f1523b43464d?! I have tried a lot, really... I actually even reinstalled all dependent programs and reconfigured them from the defaults. The group exists and has me in it. Firefox runs against the old proxy and I use IE for testing the new one, but I keep getting Access Denied all the time. To be honest I'm quite a beginner, so please don't be too harsh; I'm keen to improve and will gather whatever information is needed to fix this, but I only started working 2 months ago, have just 1 1/2 years of training, and not a single second of it on Linux ;)
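    Two directions commonly suggested for setups like this, offered only as a sketch under assumptions (Debian defaults and the user/group names from the config above), not as a verified fix for this environment:

    # On Debian, squid3 normally runs as user 'proxy', and ntlm_auth needs access to
    # winbind's privileged pipe directory, so the squid user is usually added to the
    # winbindd_priv group (assumption: default Debian user and group names):
    usermod -a -G winbindd_priv proxy
    /etc/init.d/squid3 restart

    # squid evaluates http_access rules top-down, so a 'deny all' placed before
    # 'allow auth' makes the later rule unreachable. One possible ordering:
    acl auth proxy_auth REQUIRED
    acl InetAccess external InetGroup Internetz
    http_access deny !auth          # challenge the browser (407) before anything else
    http_access allow InetAccess    # then allow authenticated members of the Internetz group
    http_access deny all            # keep the final catch-all last

    If winbind pipe permissions are the culprit, ntlm_auth errors typically show up in squid's cache.log, so that may be worth checking first.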

    Read the article

< Previous Page | 259 260 261 262 263 264 265 266 267 268 269 270  | Next Page >