Search Results

Search found 44052 results on 1763 pages for 'pass by value'.


  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

    To get started, we'll create a demo data set and load the plyr package.

      set.seed(1)
      d <- data.frame(year = rep(2000:2014, each = 3),
                      count = round(runif(45, 0, 20)))
      dim(d)
      library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

      # Example 1
      res <- ddply(d, "year", function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        cv <- sd.count/mean.count
        data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

      D <- ore.push(d)
      res <- ore.groupApply (D, D$year, function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        cv <- sd.count/mean.count
        data.frame(year=x$year[1], cv.count = cv)
        }, FUN.VALUE=data.frame(year=1, cv.count=1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

      R> class(res)
      [1] "ore.frame"
      attr(,"package")
      [1] "OREbase"
      R> head(res)
        year  cv.count
      1 2000 0.3984848
      2 2001 0.6062178
      3 2002 0.2309401
      4 2003 0.5773503
      5 2004 0.3069680
      6 2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used.
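    For example, the first ore.groupApply call above could be given a parallelism hint as sketched below; this is a minimal variation of the call already shown, and the value 2 is just an arbitrary illustration of the "specific number" case:

      res <- ore.groupApply (D, D$year, function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        data.frame(year=x$year[1], cv.count = sd.count/mean.count)
        }, FUN.VALUE=data.frame(year=1, cv.count=1),
        parallel=2)  # hint: use up to two server-side R engines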
    The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

      # Example 2
      ddply(d, "year", summarise, mean.count = mean(count))
      res <- ore.groupApply (D, D$year, function(x) {
        mean.count <- mean(x$count)
        data.frame(year=x$year[1], mean.count = mean.count)
        }, FUN.VALUE=data.frame(year=1, mean.count=1))

      R> head(res)
        year mean.count
      1 2000   7.666667
      2 2001  13.333333
      3 2002  15.000000
      4 2003   3.000000
      5 2004  12.333333
      6 2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

      # Example 3
      ddply(d, "year", transform, total.count = sum(count))
      res <- ore.groupApply (D, D$year, function(x) {
        total.count <- sum(x$count)
        data.frame(year=x$year[1], count=x$count, total.count = total.count)
        }, FUN.VALUE=data.frame(year=1, count=1, total.count=1))

      > head(res)
        year count total.count
      1 2000     5          23
      2 2000     7          23
      3 2000    11          23
      4 2001    18          40
      5 2001     4          40
      6 2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

      # Example 4
      ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
            cv = sigma/mu)
      res <- ore.groupApply (D, D$year, function(x) {
        mu <- mean(x$count)
        sigma <- sd(x$count)
        cv <- sigma/mu
        data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)
        }, FUN.VALUE=data.frame(year=1, count=1, mu=1, sigma=1, cv=1))

      R> head(res)
        year count        mu    sigma        cv
      1 2000     5  7.666667 3.055050 0.3984848
      2 2000     7  7.666667 3.055050 0.3984848
      3 2000    11  7.666667 3.055050 0.3984848
      4 2001    18 13.333333 8.082904 0.6062178
      5 2001     4 13.333333 8.082904 0.6062178
      6 2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

      # Example 5
      baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
      x <- ddply(baseball.dat, c("year", "team"), summarize,
                 homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

      BB.DAT <- ore.push(baseball)
      BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+"))
      BB.DAT2 <- subset(BB.DAT, year > 2000)
      X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {
        data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))
        }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE)
      res <- ore.sort(X, by=c("year","team"))

      R> head(res)
        year team homeruns
      1 2001  ANA        4
      2 2001  ARI      155
      3 2001  ATL       63
      4 2001  BAL       58
      5 2001  BOS       77
      6 2001  CHA       63

    Our next example is derived from the ggplot function documentation. This illustrates the use of ddply in combination with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke this on the database server. The graph will be returned to the client window, as depicted below.
      # Example 6 with ggplot2
      library(ggplot2)
      df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                       y = rnorm(30))
      # Compute sample mean and standard deviation in each group
      library(plyr)
      ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
      # Set up a skeleton ggplot object and add layers:
      ggplot() +
        geom_point(data = df, aes(x = gp, y = y)) +
        geom_point(data = ds, aes(x = gp, y = mean),
                   colour = 'red', size = 3) +
        geom_errorbar(data = ds, aes(x = gp, y = mean,
                                     ymin = mean - sd, ymax = mean + sd),
                   colour = 'red', width = 0.4)

      DF <- ore.push(df)
      ore.tableApply(DF, function(df) {
        library(ggplot2)
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)
      })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

      df2 <- rbind(df,df,df)
      df2$y <- df2$y + rnorm(nrow(df2))
      df2$index <- c(rep(1,30), rep(2,30), rep(3,30))  # one index value per 30-row copy of df
      DF2 <- ore.push(df2)
      res <- ore.groupApply(DF2, DF2$index, function(df) {
        df <- df[,1:2]
        library(ggplot2)
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)
        }, parallel=TRUE)
      res[[1]]
      res[[2]]
      res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • Log Blog

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Logging – A log blog

    In another blog (Missing Fields and Defaults) I spoke about not doing a blog about log files, but then I looked at it again and realized that this is a nice opportunity to show a simple yet powerful tool and also deal with static variables and functions in C#. My log had to be able to answer a few simple logging rules:

    • To log or not to log? That is the question – Always log! That is the answer.

    • Do we share a log? Even when a file is opened with a minimal lock, it does not share well and performance greatly suffers. So sharing a log is not a good idea. Also, when sharing, it is harder to find your particular entries and you have to establish rules about retention. My recommendation – Do Not Share!

    • How verbose? Your log can be very verbose (a good thing when testing), very terse (a good thing in day-to-day runs), or somewhere in between. You must be the judge. In my blog, I elect to always report a run with start and end times, and always report errors. I normally use 5 levels of logging: 4 – write all, 3 – write more, 2 – write some, 1 – write errors and timing, 0 – write none. The code sample below is more general than that. It uses the config file to set the max log level, and each call to the log assigns a level to the call itself. If the level is above the .config highest level, the line will not be written. Programmers decide which log line belongs to which level, and thus we can set the .config differently for production and testing.

    • Where do I keep the log? If your career is important to you, discuss this with the boss and with the system admin. We keep logs in the L: drive of our server and make sure that we have a directory for each app that needs a log. When adding a new app, add a new directory. The default location for the log is also found in the .config file.

    • Print one or many? There are two options here:

      1. Print many, open but once – you start the stream and close it only when the program ends. This is what you can do when you perform in "batch" mode, like in a console app or a stsadm extension. The advantage of this is that starting and closing a stream is expensive and time consuming, and because we use a unique file, keeping it open for a long time does not cause contention problems.

      2. Print one entry at a time, or open many – every time you write a line, you start the stream, write to it, and close it. This works for event receivers, feature receivers, and web parts. Here scalability requires us to create objects on the fly and get rid of them as soon as possible. A default value for oneOrMany resides in the .config.

    All of the above applies to any Windows or web application, not just SharePoint. So as usual, here is a routine that does it all, and a few simple functions that call it for a variety of purposes.
    So without further ado, here is app.config:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
          <configSections>
              <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
                  <section name="statics.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
              </sectionGroup>
          </configSections>
          <applicationSettings>
              <statics.Properties.Settings>
                  <setting name="oneOrMany" serializeAs="String">
                      <value>False</value>
                  </setting>
                  <setting name="logURI" serializeAs="String">
                      <value>C:\staticLog.txt</value>
                  </setting>
                  <setting name="highestLevel" serializeAs="String">
                      <value>2</value>
                  </setting>
              </statics.Properties.Settings>
          </applicationSettings>
      </configuration>

    And now the code. In order to persist the variables between calls and also to be able to persist (or not to persist) the log file itself, I created an EventLog class with static variables and functions. Static functions do not need an instance of the class in order to work. If you ever wondered why our Main function is static, the answer is that something needs to run before instantiation so that other objects may be instantiated, and this is what the "static" Main does. The various logging functions and variables are created as static because they do not need instantiation, and as a fringe benefit they remain un-destroyed between calls. The Main function here is just used for testing. Note that it does not instantiate anything, just uses the log functions. This is possible because the functions are static. Also note that the function calls are of the form: Class.Function.

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.IO;

      namespace statics
      {
          class Program
          {
              static void Main(string[] args)
              {
                  //write a single line
                  EventLog.LogEvents("ha ha", 3, "C:\\hahafile.txt", 4, true, false);
                  //this single line will not be written because the msgLevel is too high
                  EventLog.LogEvents("baba", 3, "C:\\babafile.txt", 2, true, false);
                  //The next 4 lines will be written in succession - no closing
                  EventLog.LogLine("blah blah", 1);
                  EventLog.LogLine("da da", 1);
                  EventLog.LogLine("ma ma", 1);
                  EventLog.LogLine("lah lah", 1);
                  EventLog.CloseLog(); // log will close
                  //now with specific functions
                  EventLog.LogSingleLine("one line", 1);
                  //this is just a test, the log is already closed
                  EventLog.CloseLog();
              }
          }

          public class EventLog
          {
              public static string logURI = Properties.Settings.Default.logURI;
              public static bool isOneLine = Properties.Settings.Default.oneOrMany;
              public static bool isOpen = false;
              public static int highestLevel = Properties.Settings.Default.highestLevel;
              public static StreamWriter sw;

              /// <summary>
              /// the program will "print" the msg into the log
              /// unless msgLevel is > msgLimit.
              /// oneOrMany is true when once - the program will open the log,
              /// print the msg and close the log.
              /// False when many - the program will
              /// keep the log open until close = true.
              /// Normally all the arguments will come from the app.config.
              /// Called by many overloads of LogLine.
              /// </summary>
              /// <param name="msg"></param>
              /// <param name="msgLevel"></param>
              /// <param name="logFileName"></param>
              /// <param name="msgLimit"></param>
              /// <param name="oneOrMany"></param>
              /// <param name="close"></param>
              public static void LogEvents(string msg, int msgLevel, string logFileName, int msgLimit, bool oneOrMany, bool close)
              {
                  //to print or not to print
                  if (msgLevel <= msgLimit)
                  {
                      //open the file, from the argument (logFileName) or from the config (logURI)
                      if (!isOpen)
                      {
                          string logFile = logFileName;
                          if (logFileName == "")
                          {
                              logFile = logURI;
                          }
                          sw = new StreamWriter(logFile, true);
                          sw.WriteLine("Started At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                          isOpen = true;
                      }
                      //print
                      sw.WriteLine(msg);
                  }
                  //close when instructed
                  if (close || oneOrMany)
                  {
                      if (isOpen)
                      {
                          sw.WriteLine("Ended At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                          sw.Close();
                          isOpen = false;
                      }
                  }
              }

              /// <summary>
              /// The simplest, just msg and level
              /// </summary>
              /// <param name="msg"></param>
              /// <param name="msgLevel"></param>
              public static void LogLine(string msg, int msgLevel)
              {
                  //use the given msg and msgLevel and all others are defaults
                  LogEvents(msg, msgLevel, "", highestLevel, isOneLine, false);
              }

              /// <summary>
              /// one line at a time - open, print, close
              /// </summary>
              /// <param name="msg"></param>
              /// <param name="msgLevel"></param>
              public static void LogSingleLine(string msg, int msgLevel)
              {
                  LogEvents(msg, msgLevel, "", highestLevel, true, true);
              }

              /// <summary>
              /// used to close. High level, low limit, once and close are set.
              /// </summary>
              public static void CloseLog()
              {
                  LogEvents("", 15, "", 1, true, true);
              }
          }
      }

    That's all folks!
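    For reference, with the app.config above (oneOrMany=False, highestLevel=2, logURI=C:\staticLog.txt), a run of the test Main should leave something like the following in C:\staticLog.txt. The timestamps are illustrative only; the "ha ha" line lands in C:\hahafile.txt instead, and "baba" is filtered out because its msgLevel of 3 exceeds its msgLimit of 2:

      Started At: 2011-06-01 09:15:02
      blah blah
      da da
      ma ma
      lah lah
      Ended At: 2011-06-01 09:15:02
      Started At: 2011-06-01 09:15:02
      one line
      Ended At: 2011-06-01 09:15:02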

    Read the article

  • Model View Control Issue: Null Pointer Initialization Question

    - by David Dimalanta
    Good morning again. This is David. Please, I need urgent help regarding a model-view-control issue, where I am separating my code into distinct groups:

    • An Activity Java class to display the interface
    • A view-and-function Java class for drawing cards and displaying them on the Activity class

    The problem is that the result returns a NullPointerException. I have initialized the IDs for the TextView and ImageView under the class "draw_deck.java". Please help me. Here's my code for draw_deck.java:

      package com.bodapps.inbetween.model;

      import android.content.Context;
      import android.view.View;
      import android.widget.ImageView;
      import android.widget.TextView;

      import com.bodapps.inbetween.R;

      public class draw_deck extends View {
          public TextView count_label;
          public ImageView draw_card;
          private int count;

          public draw_deck(Context context) {
              super(context);
              // TODO Auto-generated constructor stub
              //I have initialized two widgets for ID. I still don't get why I got
              //force closed by the Null Pointer Exception thing.
              draw_card = (ImageView) findViewById(R.id.IV_Draw_Card);
              count_label = (TextView) findViewById(R.id.Text_View_Count_Card);
          }

          public void draw(int s, int c, String strSuit, String strValue, Pile pile, Context context) {
              //super(context);
              //Just printing the card drawn from pile
              int suit, value = 1;
              draw_card = (ImageView) findViewById(R.id.IV_Draw_Card);
              count_label = (TextView) findViewById(R.id.Text_View_Count_Card);
              Card card;
              if (!pile.isEmpty()) //Setting it to IF statement displays the card one by one.
              {
                  card = pile.drawFromPile();
                  //Need to check first if card is null.
                  if (card != null)
                  {
                      //draws an extra
                      if (card != null)
                      {
                          //Get suit of card to print out.
                          suit = card.getSuit();
                          switch (suit)
                          {
                              case CardInfo.DIAMOND: strSuit = "DIAMOND"; s=0; break;
                              case CardInfo.HEART:   strSuit = "HEART";   s=1; break;
                              case CardInfo.SPADE:   strSuit = "SPADE";   s=2; break;
                              case CardInfo.CLUB:    strSuit = "CLUB";    s=3; break;
                          }
                          //Get value of card to print out.
                          value = card.getValue();
                          switch (value)
                          {
                              case CardInfo.ACE:   strValue = "ACE";   c=0; break;
                              case CardInfo.TWO:                       c=1; break;
                              case CardInfo.THREE: strValue = "THREE"; c=2; break;
                              case CardInfo.FOUR:  strValue = "FOUR";  c=3; break;
                              case CardInfo.FIVE:  strValue = "FIVE";  c=4; break;
                              case CardInfo.SIX:   strValue = "SIX";   c=4; break;
                              case CardInfo.SEVEN: strValue = "SEVEN"; c=4; break;
                              case CardInfo.EIGHT: strValue = "EIGHT"; c=4; break;
                              case CardInfo.NINE:  strValue = "NINE";  c=4; break;
                              case CardInfo.TEN:   strValue = "TEN";   c=4; break;
                              case CardInfo.JACK:  strValue = "JACK";  c=4; break;
                              case CardInfo.QUEEN: strValue = "QUEEN"; c=4; break;
                              case CardInfo.KING:  strValue = "KING";  c=4; break;
                          }
                      }
                  }
              }
              //Below two lines of code, this is where the Null Pointer Exception is issued.
              draw_card.setImageResource(deck[s][c]);
              count_label.setText(new StringBuilder(strValue).append(" of ").append(strSuit).append(String.valueOf(" " + count++)).toString());
          }

          //Choice of Suits in a Deck
          public Integer[][] deck = {
              //Array Group 1 is [0][0] (No. of Cards: 4 - DIAMOND)
              { R.drawable.card_dummy_1, R.drawable.card_dummy_2, R.drawable.card_dummy_4, R.drawable.card_dummy_5, R.drawable.card_dummy_3 },
              //Array Group 2 is [1][0] (No. of Cards: 4 - HEART)
              { R.drawable.card_dummy_1, R.drawable.card_dummy_2, R.drawable.card_dummy_4, R.drawable.card_dummy_5, R.drawable.card_dummy_3 },
              //Array Group 3 is [2][0] (No. of Cards: 4 - SPADE)
              { R.drawable.card_dummy_1, R.drawable.card_dummy_2, R.drawable.card_dummy_4, R.drawable.card_dummy_5, R.drawable.card_dummy_3 },
              //Array Group 4 is [3][0] (No. of Cards: 4 - CLUB)
              { R.drawable.card_dummy_1, R.drawable.card_dummy_2, R.drawable.card_dummy_4, R.drawable.card_dummy_5, R.drawable.card_dummy_3 },
          };
      }

    And this is one of the activity classes, Player_Mode_2.java:

      package com.bodapps.inbetween;

      import java.util.Random;

      import android.app.Activity;
      import android.app.Dialog;
      import android.content.Context;
      import android.os.Bundle;
      import android.view.View;
      import android.view.View.OnClickListener;
      import android.widget.Button;
      import android.widget.EditText;
      import android.widget.ImageView;
      import android.widget.TextView;
      import android.widget.Toast;

      import com.bodapps.inbetween.model.Card;
      import com.bodapps.inbetween.model.Pile;
      import com.bodapps.inbetween.model.draw_deck;

      /*
       * Public class for Two-Player mode.
       */
      public class Player_Mode_2 extends Activity {
          //Image Views
          private ImageView draw_card;
          private ImageView player_1;
          private ImageView player_2;
          private ImageView icon;

          //Buttons
          private Button set_deck;

          //Edit Texts
          private EditText enter_no_of_decks;

          //Text Views
          private TextView count_label;

          //Integer Data Types
          private int no_of_cards, count;
          private int card_multiplier;

          //Contexts
          final Context context = this;

          //Pile Model
          public Pile pile;

          //Card Model
          public Card card;

          //create View
          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              setContentView(R.layout.play_2_player_mode);

              //-----[ Search for Views ]-----
              //Initialize for Image View
              draw_card = (ImageView) findViewById(R.id.IV_Draw_Card);
              player_1 = (ImageView) findViewById(R.id.IV_Player_1_Card);
              player_2 = (ImageView) findViewById(R.id.IV_Player_2_Card);

              //Initialize for Text View or Label
              count_label = (TextView) findViewById(R.id.Text_View_Count_Card);

              //-----[ Adding Values ]-----
              //Integer Values
              count = 0;
              no_of_cards = 0;

              //-----[ Adding Dialog ]-----
              //Initializing Dialog
              final Dialog deck_dialog = new Dialog(context);
              deck_dialog.setContentView(R.layout.dialog);
              deck_dialog.setTitle("Deck Dialog");

              //-----[ Initializing Views for Dialog's Contents ]-----
              //Initialize for Edit Text
              enter_no_of_decks = (EditText) deck_dialog.findViewById(R.id.Edit_Text_Set_Number_of_Decks);

              //Initialize for Button
              set_deck = (Button) deck_dialog.findViewById(R.id.Button_Deck);

              //-----[ Setting onClickListener() ]-----
              //Set Event Listener for Image View
              draw_card.setOnClickListener(new Draw_Card_Model());

              //Set Event Listener for Setting the Deck
              set_deck.setOnClickListener(new OnClickListener() {
                  public void onClick(View v) {
                      if (card_multiplier <= 8)
                      {
                          //Use "Integer.parseInt()" method to instantly convert from String to int value.
                          card_multiplier = Integer.parseInt(enter_no_of_decks.getText().toString());
                          //Shuffling cards...
                          pile = new Pile(card_multiplier); //Multiply no. of decks
                          //Dismiss or close the dialog.
                          deck_dialog.dismiss();
                      }
                      else
                      {
                          Toast.makeText(getApplicationContext(), "Please choose a number from 1 to 8.", Toast.LENGTH_SHORT).show();
                      }
                  }
              });

              //Show dialog.
              deck_dialog.show();
          }

          //Shuffling the Array
          public void Shuffle_Cards(Integer[][] Shuffle_Deck) {
              Random random = new Random();
              for (int i = Shuffle_Deck[no_of_cards].length - 1; i >= 0; i--)
              {
                  int Index = random.nextInt(i + 1);
                  //Simple Swapping
                  Integer swap = Shuffle_Deck[card_multiplier-1][Index];
                  Shuffle_Deck[card_multiplier-1][Index] = Shuffle_Deck[card_multiplier-1][i];
                  Shuffle_Deck[card_multiplier-1][i] = swap;
              }
          }

          //Private Class for Random Card Draw
          private class Draw_Card_Model implements OnClickListener {
              public void onClick(View v) {
                  //Just printing the card drawn from pile
                  int suit = 0, value = 0;
                  String strSuit = "", strValue = "";
                  draw_deck draw = new draw_deck(context); //This line is where the Null Pointer Exception is issued.
                  if (count == card_multiplier*52)
                  {
                      // A message shows up when all cards are drawn out.
                      Toast.makeText(getApplicationContext(), "All cards have been used up.", Toast.LENGTH_SHORT).show();
                      draw_card.setEnabled(false);
                  }
                  else
                  {
                      draw.draw(suit, value, strSuit, strValue, pile, context);
                      count_label.setText(count); //This is where I got the force close error, although "int count" was initialized. It was supposed to be accepted by the setText() method.
                      count++;
                  }
              }
          }
      }

    Take note that the issues on the NullPointerException are the ImageView and the EditText. I got to test it. Thanks. If you have any info about my question, let me know it frankly.
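    For context on why those lookups come back null: findViewById() called inside a bare custom View searches that View's own (empty) child hierarchy, not the Activity's inflated layout, so draw_card and count_label stay null inside draw_deck. Separately, TextView.setText(int) treats a bare int as a string-resource ID, which is the likely source of the force close on count_label.setText(count). Below is a minimal sketch of one conventional fix, where the Activity looks the widgets up and hands them to the model class; the class name DrawDeck and its constructor are hypothetical, not from the original code:

      // DrawDeck.java (sketch, hypothetical replacement for draw_deck)
      package com.bodapps.inbetween.model;

      import android.widget.ImageView;
      import android.widget.TextView;

      public class DrawDeck {
          private final ImageView drawCard;
          private final TextView countLabel;
          private int count;

          // The Activity passes in views it found via its own findViewById().
          public DrawDeck(ImageView drawCard, TextView countLabel) {
              this.drawCard = drawCard;
              this.countLabel = countLabel;
          }

          public void showCard(int imageResId, String label) {
              drawCard.setImageResource(imageResId);
              // String concatenation avoids the setText(int) resource-id overload
              countLabel.setText(label + " " + (count++));
          }
      }

      // In Player_Mode_2.onCreate(), after setContentView(...):
      //   DrawDeck draw = new DrawDeck(
      //           (ImageView) findViewById(R.id.IV_Draw_Card),
      //           (TextView) findViewById(R.id.Text_View_Count_Card));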

    Read the article

  • Evaluating code for a graph [migrated]

    - by mazen.r.f
    This is relatively long code. Please take a look at this code if you are still willing to do so; I will appreciate your feedback. I have spent two days trying to come up with code to represent a graph, calculating the shortest path using Dijkstra's algorithm. But I am not able to get the right result, even though the code runs without errors. The result is not correct and I am always getting 0.

    I have three classes: Vertex, Edge, and Graph. The Vertex class represents the nodes in the graph and it has id and carried (which carries the weight of the links connected to it while using Dijkstra's algorithm) and a vector of the ids belonging to the other nodes the path will go through before arriving at the node itself. This vector is named previous_nodes. The Edge class represents the edges in the graph and has two vertices (one on each side) and a weight (the distance between the two vertices). The Graph class represents the graph. It has two vectors, where one is the vertices included in this graph, and the other is the edges included in the graph. Inside the class Graph, there is a method named shortest() that takes the source node id and the destination and calculates the shortest path using Dijkstra's algorithm. I think that it is the most important part of the code.

    My theory about the code is that I will create two vectors, one for the vertices in the graph named vertices, and another vector named ver_out (it will include the vertices out of calculation in the graph). I will also have two vectors of type Edge, where one is named edges (for all the edges in the graph), and the other is named track (to temporarily contain the edges linked to the temporary source node in every round). After the calculation of every round, the vector track will be cleared.

    In main(), I've created five vertices and 10 edges to simulate a graph. The result of the shortest path is supposed to be 4, but I am always getting 0. That means I have something wrong in my code. If you are interested in helping me find my mistake and making the code work, please take a look.

    The way shortest() works is as follows: at the beginning, all the edges are included in the vector edges. We select the edges related to the source and put them in the vector track, then we iterate through track and add the weight of every edge to the vertex (node) related to it (not the source vertex). After that, we clear track and remove the source vertex from the vector vertices and select a new source. Then we start over again: select the edges related to the new source, put them in track, iterate over the edges in track, adding the weights to the corresponding vertices, then remove this vertex from the vector vertices. Then clear track, select a new source, and so on.
      #include <iostream>
      #include <vector>
      #include <stdlib.h> // for rand()

      using namespace std;

      class Vertex
      {
      private:
          unsigned int id;      // the name of the vertex
          unsigned int carried; // the weight a vertex may carry when calculating shortest path
          vector<unsigned int> previous_nodes;
      public:
          unsigned int get_id() {return id;};
          unsigned int get_carried() {return carried;};
          void set_id(unsigned int value) {id = value;};
          void set_carried(unsigned int value) {carried = value;};
          void previous_nodes_update(unsigned int val) {previous_nodes.push_back(val);};
          void previous_nodes_erase(unsigned int val) {previous_nodes.erase(previous_nodes.begin() + val);};
          Vertex(unsigned int init_val = 0, unsigned int init_carried = 0)
              : id(init_val), carried(init_carried) // constructor
          { }
          ~Vertex() {}; // destructor
      };

      class Edge
      {
      private:
          Vertex first_vertex;  // a vertex on one side of the edge
          Vertex second_vertex; // a vertex on the other side of the edge
          unsigned int weight;  // the value of the edge ( or its weight )
      public:
          unsigned int get_weight() {return weight;};
          void set_weight(unsigned int value) {weight = value;};
          Vertex get_ver_1() {return first_vertex;};
          Vertex get_ver_2() {return second_vertex;};
          void set_first_vertex(Vertex v1) {first_vertex = v1;};
          void set_second_vertex(Vertex v2) {second_vertex = v2;};
          Edge(const Vertex& vertex_1 = 0, const Vertex& vertex_2 = 0, unsigned int init_weight = 0)
              : first_vertex(vertex_1), second_vertex(vertex_2), weight(init_weight)
          { }
          ~Edge() {}; // destructor
      };

      class Graph
      {
      private:
          std::vector<Vertex> vertices;
          std::vector<Edge> edges;
      public:
          Graph(vector<Vertex> ver_vector, vector<Edge> edg_vector)
              : vertices(ver_vector), edges(edg_vector)
          { }
          ~Graph() {};
          vector<Vertex> get_vertices() {return vertices;};
          vector<Edge> get_edges() {return edges;};
          void set_vertices(vector<Vertex> vector_value) {vertices = vector_value;};
          void set_edges(vector<Edge> vector_ed_value) {edges = vector_ed_value;};

          unsigned int shortest(unsigned int src, unsigned int dis)
          {
              vector<Vertex> ver_out;
              vector<Edge> track;

              for(unsigned int i = 0; i < edges.size(); ++i)
              {
                  if((edges[i].get_ver_1().get_id() == vertices[src].get_id()) ||
                     (edges[i].get_ver_2().get_id() == vertices[src].get_id()))
                  {
                      track.push_back (edges[i]);
                      edges.erase(edges.begin()+i);
                  }
              };

              for(unsigned int i = 0; i < track.size(); ++i)
              {
                  if(track[i].get_ver_1().get_id() != vertices[src].get_id())
                  {
                      track[i].get_ver_1().set_carried((track[i].get_weight()) + track[i].get_ver_2().get_carried());
                      track[i].get_ver_1().previous_nodes_update(vertices[src].get_id());
                  }
                  else
                  {
                      track[i].get_ver_2().set_carried((track[i].get_weight()) + track[i].get_ver_1().get_carried());
                      track[i].get_ver_2().previous_nodes_update(vertices[src].get_id());
                  }
              }

              for(unsigned int i = 0; i < vertices.size(); ++i)
                  if(vertices[i].get_id() == src)
                      vertices.erase(vertices.begin() + i); // removing the source vertex from the vertices vector

              ver_out.push_back (vertices[src]);
              track.clear();

              if(vertices[0].get_id() != dis) {src = vertices[0].get_id();}
              else {src = vertices[1].get_id();}

              for(unsigned int i = 0; i < vertices.size(); ++i)
                  if((vertices[i].get_carried() < vertices[src].get_carried()) && (vertices[i].get_id() != dis))
                      src = vertices[i].get_id();

              //while(!edges.empty())
              for(unsigned int round = 0; round < vertices.size(); ++round)
              {
                  for(unsigned int k = 0; k < edges.size(); ++k)
                  {
                      if((edges[k].get_ver_1().get_id() == vertices[src].get_id()) ||
                         (edges[k].get_ver_2().get_id() == vertices[src].get_id()))
                      {
                          track.push_back (edges[k]);
                          edges.erase(edges.begin()+k);
                      }
                  };

                  for(unsigned int n = 0; n < track.size(); ++n)
                      if((track[n].get_ver_1().get_id() != vertices[src].get_id()) &&
                         (track[n].get_ver_1().get_carried() > (track[n].get_ver_2().get_carried() + track[n].get_weight())))
                      {
                          track[n].get_ver_1().set_carried((track[n].get_weight()) + track[n].get_ver_2().get_carried());
                          track[n].get_ver_1().previous_nodes_update(vertices[src].get_id());
                      }
                      else if(track[n].get_ver_2().get_carried() > (track[n].get_ver_1().get_carried() + track[n].get_weight()))
                      {
                          track[n].get_ver_2().set_carried((track[n].get_weight()) + track[n].get_ver_1().get_carried());
                          track[n].get_ver_2().previous_nodes_update(vertices[src].get_id());
                      }

                  for(unsigned int t = 0; t < vertices.size(); ++t)
                      if(vertices[t].get_id() == src)
                          vertices.erase(vertices.begin() + t);

                  track.clear();

                  if(vertices[0].get_id() != dis) {src = vertices[0].get_id();}
                  else {src = vertices[1].get_id();}

                  for(unsigned int tt = 0; tt < edges.size(); ++tt)
                  {
                      if(vertices[tt].get_carried() < vertices[src].get_carried())
                      {
                          src = vertices[tt].get_id();
                      }
                  }
              }
              return vertices[dis].get_carried();
          }
      };

      int main()
      {
          cout << "Hello, This is a graph" << endl;

          vector<Vertex> vers(5);
          vers[0].set_id(0);
          vers[1].set_id(1);
          vers[2].set_id(2);
          vers[3].set_id(3);
          vers[4].set_id(4);

          vector<Edge> eds(10);
          eds[0].set_first_vertex(vers[0]); eds[0].set_second_vertex(vers[1]); eds[0].set_weight(5);
          eds[1].set_first_vertex(vers[0]); eds[1].set_second_vertex(vers[2]); eds[1].set_weight(9);
          eds[2].set_first_vertex(vers[0]); eds[2].set_second_vertex(vers[3]); eds[2].set_weight(4);
          eds[3].set_first_vertex(vers[0]); eds[3].set_second_vertex(vers[4]); eds[3].set_weight(6);
          eds[4].set_first_vertex(vers[1]); eds[4].set_second_vertex(vers[2]); eds[4].set_weight(2);
          eds[5].set_first_vertex(vers[1]); eds[5].set_second_vertex(vers[3]); eds[5].set_weight(5);
          eds[6].set_first_vertex(vers[1]); eds[6].set_second_vertex(vers[4]); eds[6].set_weight(7);
          eds[7].set_first_vertex(vers[2]); eds[7].set_second_vertex(vers[3]); eds[7].set_weight(1);
          eds[8].set_first_vertex(vers[2]); eds[8].set_second_vertex(vers[4]); eds[8].set_weight(8);
          eds[9].set_first_vertex(vers[3]); eds[9].set_second_vertex(vers[4]); eds[9].set_weight(3);

          unsigned int path;
          Graph graf(vers, eds);
          path = graf.shortest(2, 4);
          cout << path << endl;

          return 0;
      }
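    For comparison while debugging, here is a minimal, self-contained Dijkstra sketch over the same 5-vertex, 10-edge graph. This is the standard priority-queue formulation, not a fix of the class design above, and it prints 4 for the (2, 4) query – the value the code above is expected to produce:

      #include <iostream>
      #include <vector>
      #include <queue>
      #include <functional>
      #include <limits>

      int main()
      {
          const unsigned N = 5;
          // adjacency list of {neighbor, weight} pairs, mirroring the 10 edges in main() above
          std::vector<std::vector<std::pair<unsigned, unsigned>>> adj(N);
          auto add = [&](unsigned a, unsigned b, unsigned w) {
              adj[a].push_back({b, w});
              adj[b].push_back({a, w});
          };
          add(0,1,5); add(0,2,9); add(0,3,4); add(0,4,6);
          add(1,2,2); add(1,3,5); add(1,4,7);
          add(2,3,1); add(2,4,8); add(3,4,3);

          const unsigned src = 2, dst = 4;
          const unsigned INF = std::numeric_limits<unsigned>::max();
          std::vector<unsigned> dist(N, INF);
          dist[src] = 0;

          // min-heap of {distance-so-far, vertex}
          std::priority_queue<std::pair<unsigned, unsigned>,
                              std::vector<std::pair<unsigned, unsigned>>,
                              std::greater<std::pair<unsigned, unsigned>>> pq;
          pq.push({0, src});
          while (!pq.empty())
          {
              auto [d, u] = pq.top();
              pq.pop();
              if (d > dist[u]) continue; // stale queue entry, already relaxed
              for (auto [v, w] : adj[u])
                  if (dist[u] + w < dist[v])
                  {
                      dist[v] = dist[u] + w;
                      pq.push({dist[v], v});
                  }
          }
          std::cout << dist[dst] << std::endl; // prints 4 for this graph
          return 0;
      }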

    Read the article

  • Varnish VCL - Regular Expression Evaluation

    - by Hugues ALARY
    I have been struggling for the past few days with this problem. Basically, I want to send to a client browser a cookie of the form foo[sha1oftheurl]=[randomvalue] if and only if the cookie has not already been set. E.g., if a client browser requests "/page.html", the HTTP response will be like:

      resp.http.Set-Cookie = "foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=ABCD;"

    Then, if the same client requests "/index.html", the HTTP response will contain a header:

      resp.http.Set-Cookie = "foo14fe4559026d4c5b5eb530ee70300c52d99e70d7=QWERTY;"

    In the end, the client browser will have 2 cookies:

      foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=ABCD
      foo14fe4559026d4c5b5eb530ee70300c52d99e70d7=QWERTY

    Now, that is not complicated in itself. The following code does it:

      import digest;
      import random; ## This vmod does not exist, it's just for the example.

      sub vcl_recv() {
          ## We compute the sha1 of the requested URL and store it in req.http.Url-Sha1
          set req.http.Url-Sha1 = digest.hash_sha1(req.url);
          set req.http.random-value = random.get_rand();
      }

      sub vcl_deliver() {
          ## We create a cookie on the client browser by creating a "Set-Cookie" header
          ## In our case the cookie we create is of the form foo[sha1]=[randomvalue]
          ## e.g. for a URL "/page.html" the cookie will be foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=[randomvalue]
          set resp.http.Set-Cookie = {""} + resp.http.Set-Cookie + "foo" + req.http.Url-Sha1 + "=" + req.http.random-value;
      }

    However, this code does not take into account the case where the cookie already exists. I need to check that the cookie does not exist before generating a random value. So I thought about this code:

      import digest;
      import random;

      sub vcl_recv() {
          ## We compute the sha1 of the requested URL and store it in req.http.Url-Sha1
          set req.http.Url-Sha1 = digest.hash_sha1(req.url);
          set req.http.random-value = random.get_rand();
          set req.http.regex = "abtest" + req.http.Url-Sha1;
          if(!req.http.Cookie ~ req.http.regex) {
              set req.http.random-value = random.get_rand();
          }
      }

    The problem is that Varnish does not compile regular expressions at run time, which leads to this error when I try to compile:

      Message from VCC-compiler:
      Expected CSTR got 'req.http.regex'
      (program line 940), at
      ('input' Line 42 Pos 31)
      if(req.http.Cookie !~ req.http.regex) {
      ------------------------------##############---
      Running VCC-compiler failed, exit 1
      VCL compilation failed

    One could propose to solve my problem by matching on the "abtest" part of the cookie, or even "abtest[a-fA-F0-9]{40}":

      if(!req.http.Cookie ~ "abtest[a-fA-F0-9]{40}") {
          set req.http.random-value = random.get_rand();
      }

    But this code matches any cookie starting with 'abtest' and containing a hexadecimal string of 40 characters. Which means that if a client requests "/page.html" first, then "/index.html", the condition will evaluate to true even if the cookie for "/index.html" has not been set. I found a bug report in which phk (or someone else) stated that computing regular expressions is extremely expensive, which is why they are evaluated during compilation. Considering this, I believe that there is no way of achieving what I want the way I've been trying to. Is there any way of solving this problem, other than writing a vmod? Thanks for your help! -Hugues
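    One avenue worth sketching here (an assumption, not something the original post confirms is available in this environment): the cookie vmod from the varnish-modules collection exposes cookie.parse() and cookie.isset(), and isset() takes an ordinary runtime STRING rather than a compiled regex, so the dynamic cookie name can be built by plain concatenation:

      import digest;
      import random;
      import cookie;  ## assumption: vmod_cookie (varnish-modules) is installed

      sub vcl_recv {
          set req.http.Url-Sha1 = digest.hash_sha1(req.url);
          ## Parse the Cookie header once, then test for the exact cookie name;
          ## no runtime regex is needed because isset() does a string lookup.
          cookie.parse(req.http.Cookie);
          if (!cookie.isset("foo" + req.http.Url-Sha1)) {
              set req.http.random-value = random.get_rand();
          }
      }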

    Read the article

  • Yahoo is sending our server's transactional email to the Spam folder, even though we have set up SPF and DKIM

    - by Derrick Miller
    Yahoo Mail is sending our server's transactional emails to the Spam folder, even though we have taken quite a few anti-spam steps. By contrast, Gmail allows the messages through to the inbox just fine. Here are the things which are in place:

    - SPF is set up for the domain holsteinplaza.com. Yahoo reports spf=pass in the message headers.
    - DKIM is set up for the domain holsteinplaza.com. Yahoo reports dkim=pass in the message headers.
    - We have a proper reverse DNS entry for the sending mail server: name-to-IP matches IP-to-name.
    - Neither DomainKeys nor Sender ID is set up. From what I can tell, DKIM is the way of the future, and there is not much to be gained from adding DomainKeys or Sender ID.

    Following are the headers. Any ideas what more I should do to get Yahoo to stop flagging the emails as spam?

        From Holstein Plaza Auctions Sat Jun 25 18:30:08 2011
        X-Apparently-To: [email protected] via 98.138.90.132; Sat, 25 Jun 2011 18:30:11 -0700
        Return-Path: <[email protected]>
        X-YahooFilteredBulk: 70.32.113.42
        Received-SPF: pass (domain of holsteinplaza.com designates 70.32.113.42 as permitted sender)
        X-YMailISG: i_vaA_QWLDuLOmXhDjUv3aBKJl5Un6EiP6Yk2m4yn3jeEuYK
         MkhpqIt9zDUbHARCwXrhl9pqjTANurGVca7gytSs.mryWVQcbWBx.DaItWRb
         VcyrIzwMzXKCSeu06H2a.cJ7HG5vJLJaKmHUUI_1ttXKn_Aegiu5yHvFX83R
         Lpth0witO9zfaKvOMaJV3LAxpIpFOydwvq1cqjZ8nURxQbxM3Cl.QW7MxxrC
         09qLVn_D_xSdU94QdU22IsVmlaRHv.uU5dnIazu.KSkhKpYykDoZA2SH0SY4
         JmTZj3LP8N926xXVDzYQ5K6QvKuJL5g0d9pYZx3KC59sgIu5oHlJ3Q15RdKb
         f3OJw0PR6oIyJ2yStVr8vfbDgOfj3qig03.Tw6g6MMNpv1G7Cuol4oJeUaYP
         xELxX6dHgBgCSuWMcbsrxbK4BIXcS2qhpMqYQ4Isk.XXyA8uvmFXyvgc1ds5
         8jo0rW.Wsw.55Z.KTPaQ0gHXj0T3OGppYMELSJv1iuhPyyAnZpmq01CU0Qd5
         CcRgdyW3HaqhmpXqJCS0Clo16zXA4HmAjR0tgIQrHRLc3D9N02AOzvmDgCb1
         vCh0p00QeKVq8UNkcShPRxZFKi9khtkLhPBlXEKkhJ76zyDmHUxTY.dQHVVD
         8D2hx7BxbqI9DINI8x5oR5Q8hYkZqHYQsmGNkaU77O2BnsEv5WxMEmzrBJ4Z
         h8zGCidgYPiZycZfnfaBp0Xb4tya2WMTN45W02JFcO1qq_UMJ9xPeqZhPEj.
         j9YvBAC8324GGF.c8eWcNB2VB34QHgTcVUl3.c0XUCuncls9Cyg4L7AoIdCi
         HvAklSzDDu9nW6732VEipV9FJ_JkDupDNQU2hfiPG.3OeF8GwTnVYnEn0EiZ
         aO0NCnZhXuLDcN3K7ml3846yRdASvzPFs9s4aJkzR0FkhVvptiMBEOdRkKdG
         wHWmvWpK4GTZpW4yU7CnKpW2MiWWn1MP0h_CCZFKs5.3mfmfPjPVIABN_RuU
         Q8ex5hdKnKlQiqK56LzcPRnYmNtrwdsUX9CYn9d6cPpXR_Bi5jrNJMNzdFvq
         lGO0CBT4QPe2V45U8PtpMitttuDA1cCvmyBPFswxNlL0jyX0a_W.vl0YW5.d
         HhDItpHhDxKRUscM28IR.exetq4QCzyM
        X-Originating-IP: [70.32.113.42]
        Authentication-Results: mta1267.mail.ac4.yahoo.com from=holsteinplaza.com; domainkeys=neutral (no sig); from=holsteinplaza.com; dkim=pass (ok)
        Received: from 127.0.0.1 (EHLO predator.axis80.com) (70.32.113.42) by mta1267.mail.ac4.yahoo.com with SMTP; Sat, 25 Jun 2011 18:30:11 -0700
        Received: (qmail 1440 invoked by uid 48); 25 Jun 2011 21:30:09 -0400
        To: [email protected]
        Subject: this is a test
        X-PHPMAILER-DKIM: phpmailer.worxware.com
        DKIM-Signature: v=1; a=rsa-sha1; q=dns/txt; l=203; s=auction; t=1309051808; c=relaxed/simple;
         h=From:To:Subject; d=holsteinplaza.com; [email protected];
         z=From:=20Holstein=20Plaza=20Auctions=20<[email protected]>
         |To:[email protected]
         |Subject:=20this=20is=20a=20test;
         bh=B3Tw5AQb1va627KEoazuFEBZ0fg=;
         b=oQ5uFq+oekPTGhszyIritjuuIAi3qPNyeitu+aWMhdx3oC6O2j5hJsDFpK0sS5fms7QdnBkBcEzT0iekEvn9EfAdCkGZ2KrtEC0yv7QKQcrjXxy07GJpj9nq0LYbgOuPdw8mGvKxlRZ+jFBX0DRJm0xXFLkr+MEaILw7adHTCCM=
        Date: Sat, 25 Jun 2011 21:30:08 -0400
        From: Holstein Plaza Auctions <[email protected]>
        Reply-to: Holstein Plaza Auctions <[email protected]>
        Message-ID: <[email protected]>
        X-Priority: 3
        X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net)
        MIME-Version: 1.0
        Content-Transfer-Encoding: 8bit
        Content-Type: text/plain; charset="iso-8859-1"
        Content-Length: 195
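    Not part of the original question, but when chasing this kind of issue it helps to confirm what external resolvers actually see. A quick sketch using standard dig queries (the selector "auction" comes from the DKIM-Signature above):

        # SPF record published for the envelope domain
        dig +short TXT holsteinplaza.com

        # DKIM public key for the selector used above (s=auction)
        dig +short TXT auction._domainkey.holsteinplaza.com

        # forward and reverse DNS for the sending host
        dig +short predator.axis80.com
        dig +short -x 70.32.113.42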

    Read the article

  • SNTP, why do you mock me?!

    - by Matthew
    --- SOLVED, SEE EDIT 5 ---

    My W2K3 PDC is configured as an authoritative time server. Other servers on the domain are able to sync with it if I manually specify it in the peer list. But if I try to sync with the 'domhier' flag, it won't resync; I get the error message:

        The computer did not resync because no time data was available.

    I can only think that it is not querying the PDC. I also tried setting the registry as shown here (http://support.microsoft.com/kb/193825), but no luck. (I have not restarted the server; I am hoping I won't have to, since it is the PDC.) If you would like any further information on my config, please let me know.

    Edit 1: I have set the w32time service config AnnounceFlags to 0x05, as documented at www.krr.org/microsoft/authoritative_time_servers.php and a number of other places. The PDC syncs to an external time source (NTP). I can get the stripchart on the client from the PDC with no problems. The login server for the host I am trying to configure is shown as the PDC.

    Edit 2: A packet capture has revealed something interesting. The client is contacting the correct server and getting a valid response, but I still get the same error message. Here is the NTP excerpt from the client to the server:

        Flags: 11.. .... = Leap Indicator: alarm condition (clock not synchronized) (3)
        ..01 1... = Version number: NTP Version 3 (3)
        .... .011 = Mode: client (3)
        Peer Clock Stratum: unspecified or unavailable (0)
        Peer Polling Interval: 10 (1024 sec)
        Peer Clock Precision: 0.015625 sec
        Root Delay: 0.0000 sec
        Root Dispersion: 1.0156 sec
        Reference Clock ID: NULL
        Reference Clock Update Time: Sep 1, 2010 05:29:39.8170 UTC
        Originate Time Stamp: NULL
        Receive Time Stamp: NULL
        Transmit Time Stamp: Nov 8, 2010 01:44:44.1450 UTC
        Key ID: DC080000

    Here is the reply NTP excerpt from the server to the client:

        Flags: 0x1c
        00.. .... = Leap Indicator: no warning (0)
        ..01 1... = Version number: NTP Version 3 (3)
        .... .100 = Mode: server (4)
        Peer Clock Stratum: secondary reference (3)
        Peer Polling Interval: 10 (1024 sec)
        Peer Clock Precision: 0.00001 sec
        Root Delay: 0.1484 sec
        Root Dispersion: 0.1060 sec
        Reference Clock ID: 192.189.54.17
        Reference Clock Update Time: Nov 8, 2010 01:18:04.6223 UTC
        Originate Time Stamp: Nov 8, 2010 01:44:44.1450 UTC
        Receive Time Stamp: Nov 8, 2010 01:46:44.1975 UTC
        Transmit Time Stamp: Nov 8, 2010 01:46:44.1975 UTC
        Key ID: 00000000

    Edit 3: dumpreg for Parameters on the PDC:

        Value Name    Value Type       Value Data
        ------------------------------------------------------------------------
        ServiceMain   REG_SZ           SvchostEntry_W32Time
        ServiceDll    REG_EXPAND_SZ    C:\WINDOWS\system32\w32time.dll
        NtpServer     REG_SZ           bhvmmgt01.domain.com,0x1
        Type          REG_SZ           AllSync

    and Config:

        Value Name               Value Type    Value Data
        --------------------------------------------------------------------------
        LastClockRate            REG_DWORD     156249
        MinClockRate             REG_DWORD     155860
        MaxClockRate             REG_DWORD     156640
        FrequencyCorrectRate     REG_DWORD     4
        PollAdjustFactor         REG_DWORD     5
        LargePhaseOffset         REG_DWORD     50000000
        SpikeWatchPeriod         REG_DWORD     900
        HoldPeriod               REG_DWORD     5
        LocalClockDispersion     REG_DWORD     10
        EventLogFlags            REG_DWORD     2
        PhaseCorrectRate         REG_DWORD     7
        MinPollInterval          REG_DWORD     6
        MaxPollInterval          REG_DWORD     10
        UpdateInterval           REG_DWORD     100
        MaxNegPhaseCorrection    REG_DWORD     -1
        MaxPosPhaseCorrection    REG_DWORD     -1
        AnnounceFlags            REG_DWORD     5
        MaxAllowedPhaseOffset    REG_DWORD     300
        FileLogSize              REG_DWORD     10000000
        FileLogName              REG_SZ        C:\Windows\Temp\w32time.log
        FileLogEntries           REG_SZ        0-300

    Edit 4: Here are some notables from the NTP log file on the PDC:

        ReadConfig: failed. Use default one 'TimeJumpAuditOffset'=0x00007080
        DomainHierachy: we are now the domain root.
        ClockDispln: we're a reliable time service with no time source: LS: 0, TN: 864000000000, WAIT: 86400000

    Edit 5: F&^%ING SOLVED! OK, so I was reading about people with similar problems. Some mentioned w32time server settings applied by GPO, but I tested this early on and there were no settings applied to this service by GPO. Others said that the reporting software may not be picking up some old GPO settings. So I searched the registry for all w32time instances. I came across an interesting key that indicated there might be some other NTP software running on the server. Sure enough, I looked through the installed software list, and there the little F*&%ER was. Uninstalled it, and now it's working like a dream. FFFFFFFUUUUUUUUUUUU
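    For anyone chasing the same symptom before discovering a rogue NTP daemon, the stock w32tm tooling can narrow things down from the client side. A short sketch (these switches are standard on Server 2003; the PDC name is a placeholder):

        :: confirm the client can reach the PDC directly
        w32tm /stripchart /computer:pdc.example.com /samples:3

        :: point the client back at the domain hierarchy and force a retry
        w32tm /config /syncfromflags:domhier /update
        w32tm /resync /rediscover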

    Read the article

  • centos6.3 varnish3.03 gets the wrong backend

    - by Sola.Shawn
    I installed Varnish 3.0.3 with yum and ran into a problem. My Varnish config is below:

        backend weibo {
            .host = "192.168.1.178";
            .port = "8080";
            .connect_timeout = 20s;
            .first_byte_timeout = 20s;
            .between_bytes_timeout = 20s;
        }

        backend smth {
            .host = "192.168.1.115";
            .port = "8080";
            .connect_timeout = 20s;
            .first_byte_timeout = 20s;
            .between_bytes_timeout = 20s;
        }

        sub vcl_recv {
            if (req.restarts == 0) {
                if (req.http.x-forwarded-for) {
                    set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
                } else {
                    set req.http.X-Forwarded-For = client.ip;
                }
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                /* Non-RFC2616 or CONNECT which is weird. */
                return (pipe);
            }
            if (req.request != "GET" && req.request != "HEAD") {
                /* We only deal with GET and HEAD by default */
                return (pass);
            }
            if (req.http.Authorization || req.http.Cookie) {
                /* Not cacheable by default */
                return (pass);
            }
            if (req.http.host ~ "^(hk.)?weibo.com") {
                set req.http.host = "hk.weibo.com";
                set req.backend = weibo;
            } elseif (req.http.host ~ "^(www.)?newsmth.net") {
                set req.http.host = "www.newsmth.net";
                set req.backend = smth;
            } else {
                error 404 "Unknown virtual host";
            }
            return (lookup);
        }

        sub vcl_pipe { return (pipe); }

        sub vcl_pass { return (pass); }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            return (hash);
        }

        sub vcl_hit {
            if (req.http.Cache-Control ~ "no-cache" ||
                req.http.Cache-Control ~ "max-age=0" ||
                req.http.Pragma ~ "no-cache") {
                set obj.ttl = 0s;
                return (restart);
            }
            return (deliver);
        }

        sub vcl_miss { return (fetch); }

        sub vcl_fetch {
            if (beresp.ttl <= 120s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
                /* Mark as "Hit-For-Pass" for the next 2 minutes */
                set beresp.ttl = 10s;
                return (hit_for_pass);
            }
            return (deliver);
        }

        sub vcl_deliver { return (deliver); }

        sub vcl_init { return (ok); }

        sub vcl_fini { return (ok); }

    My Win7 hosts file has these entries added:

        192.168.1.178 www.newsmth.net
        192.168.1.178 hk.weibo.com

    I start Varnish with:

        varnishd -f /etc/varnish/dd.vcl -s malloc,100M -a 0.0.0.0:8000 -T 0.0.0.0:3500

    When I access hk.weibo.com:8000 it works fine and I get: "Hello, I am hk.weibo.com!" But when I access http://www.newsmth.net:8000/, I also get: "Hello, I am hk.weibo.com!" My question is: why isn't it "Hello, I am www.newsmth.net!"? Varnish fetched the content from the wrong backend. Does anyone know how to fix this?
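    One thing worth ruling out (an educated guess, not something stated in the question): the browser sends the port in the Host header, so Varnish sees "www.newsmth.net:8000", and any host handling that expects an exact name can be tripped up by the suffix. Normalizing the host before the routing logic runs is cheap – a sketch using vmod_std, which ships with Varnish 3:

        import std;

        sub vcl_recv {
            # strip any ":port" suffix and lowercase, so "www.newsmth.net:8000"
            # and "www.newsmth.net" are routed and hashed identically
            set req.http.host = std.tolower(regsub(req.http.host, ":[0-9]+$", ""));
        }

    Running varnishlog while requesting each hostname will also show which backend every request is actually handed to, which makes this kind of misrouting much easier to pin down.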

    Read the article

  • duplicate cache pages: Varnish

    - by Sukhjinder Singh
    Recently we configured Varnish on our server. It was set up successfully, but we noticed that if we open any page in multiple browsers, Varnish sends the request to Apache no matter whether the page is cached or not. If we refresh twice in each browser, it creates duplicate copies of the same page.

    What should happen instead: if a page is cached by Varnish, subsequent requests should be served from Varnish itself, whether we open the same page again in a browser or open that page from a different IP address.

    Following is my default.vcl file:

        backend default {
            .host = "127.0.0.1";
            .port = "80";
        }

        sub vcl_recv {
            if (req.url ~ "^/search/.*$") {
            } else {
                set req.url = regsub(req.url, "\?.*", "");
            }
            if (req.restarts == 0) {
                if (req.http.x-forwarded-for) {
                    set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
                } else {
                    set req.http.X-Forwarded-For = client.ip;
                }
            }
            if (!req.backend.healthy) {
                unset req.http.Cookie;
            }
            set req.grace = 6h;
            if (req.url ~ "^/status\.php$" ||
                req.url ~ "^/update\.php$" ||
                req.url ~ "^/admin$" ||
                req.url ~ "^/admin/.*$" ||
                req.url ~ "^/flag/.*$" ||
                req.url ~ "^.*/ajax/.*$" ||
                req.url ~ "^.*/ahah/.*$") {
                return (pass);
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset req.http.Cookie;
            }
            if (req.http.Cookie) {
                set req.http.Cookie = ";" + req.http.Cookie;
                set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
                set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
                set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
                set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
                if (req.http.Cookie == "") {
                    unset req.http.Cookie;
                } else {
                    return (pass);
                }
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                /* Non-RFC2616 or CONNECT which is weird. */
                return (pipe);
            }
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            if (req.http.Accept-Encoding) {
                if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
                    # No point in compressing these
                    remove req.http.Accept-Encoding;
                } else if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } else if (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    # unknown algorithm
                    remove req.http.Accept-Encoding;
                }
            }
            return (lookup);
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Varnish-Cache = "HIT";
            } else {
                set resp.http.X-Varnish-Cache = "MISS";
            }
        }

        sub vcl_fetch {
            if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
                set beresp.ttl = 10m;
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset beresp.http.set-cookie;
            }
            set beresp.grace = 6h;
        }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            return (hash);
        }

        sub vcl_pipe {
            set req.http.connection = "close";
        }

        sub vcl_hit {
            if (req.request == "PURGE") {
                ban_url(req.url);
                error 200 "Purged";
            }
            if (!obj.ttl > 0s) {
                return (pass);
            }
        }

        sub vcl_miss {
            if (req.request == "PURGE") {
                error 200 "Not in cache";
            }
        }
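    One classic cause of per-browser duplicates (a guess worth checking, not something stated in the question) is the backend emitting Vary: User-Agent, which forces Varnish to store a separate copy of every page for each distinct User-Agent string. A sketch of stripping it in vcl_fetch, assuming the backend does not genuinely serve per-agent content:

        sub vcl_fetch {
            if (beresp.http.Vary ~ "User-Agent") {
                # collapse the per-browser variants into a single cached object
                set beresp.http.Vary = regsub(beresp.http.Vary, ",? *User-Agent *", "");
                set beresp.http.Vary = regsub(beresp.http.Vary, "^, *", "");
                if (beresp.http.Vary == "") {
                    unset beresp.http.Vary;
                }
            }
        }

    Inspecting the raw response headers coming from Apache will confirm whether Vary is the culprit before any VCL changes are made.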

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended the most prestigious SQL Server event, SQLPASS, held Nov 8-11, 2010 in Seattle. I have only one expression for the event - Best Summit Ever. This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about them!

    [Image: Best Summit Ever]

    Trip to Seattle!

    This was my second trip to Seattle this year, and the journey is always long. Here are the travel stats on how long it takes to get to Seattle: 24 hours official air time, 36 hours total travel time (connection waits and airport commute). Every time I travel to the USA I gain a day, and when I travel back home I lose a day; the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you're attending an event like SQLPASS. Here are a few things I carry when I travel on a long journey:

    - Dry snack packs – I like to have some good Indian dry snacks with me in my backpack so I can have my own snack when I want
    - Amazon Kindle – loaded with 80+ books
    - A physical book – usually a very easy-to-read one

    I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer not to spend time in conversation with the guy sitting next to me, because usually I end up listening to his biography, which I cannot blog about.

    [Photos: Sheraton Seattle; SQLPASS]

    In any case, I love going to Seattle, as the city is great and has everything a brilliant metropolis has to offer. The new light rail is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only a few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 on each round trip thanks to the light rail.

    Sessions I attended!

    Well, I really wanted to attend most of the sessions, but there was the great dilemma of which ones to choose. There were many, many sessions to attend, and at any given time more than one good session was being presented. I had decided to attend sessions in the area of performance tuning, and I attended quite a few sessions this year compared to what I was able to do last year. Here are a few names of the speakers whose sessions I attended (please note, the following great speakers are not listed in any particular order; I loved them and I enjoyed their sessions):

    - Conor Cunningham
    - Rushabh Mehta
    - Buck Woody
    - Brent Ozar
    - Jonathan Kehayias
    - Chris Leonard
    - Bob Ward
    - Grant Fritchey

    I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any one session, but I found the insights learned in Conor Cunningham's sessions to be the highlight of the PASS Summit.

    [Photos: Rushabh Mehta at the SQLPASS keynote; Buck Woody and Brent Ozar]

    I always like sessions where the speaker is close to the audience and has real-world experience. I think speakers who have worked in the real world deliver the best content and the most useful information.

    Sessions I did not like!

    Indeed, there were a few sessions I did not like, and I am not going to name them here. However, there were strong reasons I did not like them:

    - The sessions were all theory and had no real-world connections.
    - All technical questions ended with confusing answers (lots of "I will get back to you on it," "it depends," "let us take this offline," and many more…)
    - An "I am God" kind of attitude in the speakers

    For example, I attended a session by one very well-known speaker who is a specialist in one particular area. I was a bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why so many people had left. Very soon I found I preferred staring out the window to listening to that particular speaker.

    One on One Talk!

    Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends, spending time with them one on one, and LEARNING! Here is a quick list of the people I met during this event and spent more than 30 minutes with, talking about various subjects:

    [Photos: Pinal Dave and Brad Schulz; Pinal Dave and Rushabh Mehta; Michael Coles and Pinal Dave]

    Rushabh Mehta – It is always a pleasure to meet him. He is a man with lots of energy and a passion for community. He recently told me that he really wants to turn PASS into a learning resource for every SQL Server developer and administrator in the world. We had a great in-depth discussion about how a single person can contribute to a community.

    Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable; I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting every single time.

    Glenn Berry – A real friend to everybody. He is a very simple person and very true to his heart. I think there is not a single person in the whole community who does not like him. He is a friend to all, and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMVs.

    Brad Schulz – I always wanted to meet him but never got the chance until now. I had a great time meeting him in person, and we spent a considerable amount of time discussing various T-SQL tricks and tips. I do not know where he comes up with all his different ideas, but I enjoy reading his blog and sharing in his wisdom.

    Jonathan Kehayias – He is a drill sergeant in the US Army. If you get the impression that he is a giant with a very strong personality – you are wrong. He is a very kind and soft-spoken DBA with strong performance tuning skills. I asked him how he keeps his two jobs separate, and I got a very good answer – just work hard and have passion for what you do. I attended his sessions, and his presentation style is very unique. I feel like he speaks in a language I understand.

    Louis Davidson – I had never before had a chance to sit with him and talk about technology. He has so much wisdom, and he is very kind. During the dinner, I talked with him for a long time, and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and fourth normal form. Honestly, I did not know this earlier, but now I do.

    Erland Sommarskog – This man needs no introduction; he is very well known and very clear in conveying his ideas. I have learned a lot from him over the course of the year. Every time I meet him, I learn something new, and this time was no exception.
    Joe Webb – Joey is all about community and people. We had an interesting conversation about community, MVPs, and how one can stay helpful to the community without losing passion over the long run. It is always pleasant to talk to him, and of course, I had a fun time.

    Ross Mistry – I often call him my brother because he really does look like my cousin. He gave me lots of insight into how one can write a book and how he keeps his books simple to appeal to all readers. A wonderful person and a great friend.

    Ola Hallengren – I did not know he was coming to the summit. I had a great time meeting him and had a wonderful conversation with him about his scripts and future community activities.

    Blythe Morrow – She used to be an integral part of the SQL Server community and PASS HQ. It was wonderful to meet her again and reconnect. She is a wonderful person, and I had a great time talking to her.

    Solid Quality Mentors – It is difficult to decide whom to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish-speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me.

    Party, Party and Parties

    Every evening there were various parties, and I attended almost all of them. Every party had a different theme, but the goal of all the parties was the same – networking. Here are the parties where I had lots of fun:

    - Dell Reception Party
    - Exhibitor Party
    - Solid Quality Fun Party
    - Red Gate Friends Party
    - MVP Dinner
    - Microsoft Party
    - Quest Party
    - Gameworks PASS Party
    - Volunteer Party at Garage

    [Photo: Solid Quality Mentors (10 members out of 120)]

    They were all great networking opportunities and lots of fun. I really had a great time meeting people at the various parties. There were a few people everywhere – well, I will say I am among them – who hopped parties.

    NDA – Not Decided Agenda

    During the event there were a few meetings marked "NDA." Someone asked me, "Why are these things NDA?" My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda.

    Toys, Giveaways and Luggage

    I admit, I was like a child in Gameworks, playing to win soft toys. I was doing it for my daughter. I must thank all the people who gave me their cards so I could try my luck. I won 4 soft toys for my daughter, and it was fun. Also, thanks to Angel, who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them.

    [Photo: Solid Quality booth]

    Each of the exhibitors was giving away something, and I got so much stuff that my luggage was quite a bit bigger when I returned.

    Best Exhibitor

    Idera had SQLDoctor (a real magician and a fun guy) promoting their new tool SQLDoctor. I really had a great time taking part in the magic myself. At one point, the magician made my watch disappear. I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again.

    The Common Question

    I heard the following common questions:

    - I have seen you somewhere – who are you? – I am Pinal Dave.
    - I did not know that Pinal is your first name and Dave is your last name. How do you pronounce your last name again? – Da-way.
    - How old are you? – I am as old as I can be.
    - Are you an Indian? Because you look like one. – I did not answer this one.
    - Where are you from? – This question was usually asked after looking at my badge, which says India.
    - So did you really fly from India? – Yes, because I have seasickness, so I do not prefer the sea journey.
    - How long was the journey? – 24/36/12 (air travel time / total travel time / time zone difference).
    - Why do you write on SQLAuthority.com? – Because I want to.
    - I remember your daughter; she looks like you. – Is this even a question? Of course, she is daddy's little girl.

    There were so many other questions that I will have to write another blog post about them.

    [Photo: SQLPASS]

    Again, Best Summit Ever!

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • A Communication System for XAML Applications

    - by psheriff
    In any application, you want to keep the coupling between any two or more objects as loose as possible. Coupling happens when one class contains a property that is used in another class, or uses another class in one of its methods. If you have this situation, it is called strong or tight coupling. One popular design pattern that helps keep objects loosely coupled is the Mediator design pattern. The basics of this pattern are very simple: avoid one object talking directly to another object, and instead use another class to mediate between the two. As with most of my blog posts, the purpose is to introduce you to a simple approach to using a message broker, not all of the fine details.

    IPDSAMessageBroker Interface

    As with most implementations of a design pattern, you typically start with an interface or an abstract base class. In this particular instance, an interface will work just fine. The interface for our message broker class contains just a single method, "SendMessage", and one event, "MessageReceived".

        public delegate void MessageReceivedEventHandler(
            object sender, PDSAMessageBrokerEventArgs e);

        public interface IPDSAMessageBroker
        {
            void SendMessage(PDSAMessageBrokerMessage msg);

            event MessageReceivedEventHandler MessageReceived;
        }

    PDSAMessageBrokerMessage Class

    As you can see in the interface, the SendMessage method requires a PDSAMessageBrokerMessage to be passed to it. This class has a MessageName property of type 'string' and a MessageBody property of type 'object', so you can pass whatever you want in the body. You might pass a string in the body, or a complete Customer object. The MessageName property helps the receiver of the message know what is in the MessageBody property.

        public class PDSAMessageBrokerMessage
        {
            public PDSAMessageBrokerMessage()
            {
            }

            public PDSAMessageBrokerMessage(string name, object body)
            {
                MessageName = name;
                MessageBody = body;
            }

            public string MessageName { get; set; }

            public object MessageBody { get; set; }
        }

    PDSAMessageBrokerEventArgs Class

    As our message broker class will be raising an event that others can respond to, it is a good idea to create your own event argument class. This class inherits from the System.EventArgs class and adds a couple of additional properties: MessageName, a string value, and Message, a PDSAMessageBrokerMessage.

        public class PDSAMessageBrokerEventArgs : EventArgs
        {
            public PDSAMessageBrokerEventArgs()
            {
            }

            public PDSAMessageBrokerEventArgs(string name,
                PDSAMessageBrokerMessage msg)
            {
                MessageName = name;
                Message = msg;
            }

            public string MessageName { get; set; }

            public PDSAMessageBrokerMessage Message { get; set; }
        }

    PDSAMessageBroker Class

    Now that you have an interface and a class to pass a message through an event, it is time to create the actual PDSAMessageBroker class. This class implements the SendMessage method and also creates the event handler for the delegate declared in your interface.

        public class PDSAMessageBroker : IPDSAMessageBroker
        {
            public void SendMessage(PDSAMessageBrokerMessage msg)
            {
                PDSAMessageBrokerEventArgs args;

                args = new PDSAMessageBrokerEventArgs(
                    msg.MessageName, msg);

                RaiseMessageReceived(args);
            }

            public event MessageReceivedEventHandler MessageReceived;

            protected void RaiseMessageReceived(
                PDSAMessageBrokerEventArgs e)
            {
                if (null != MessageReceived)
                    MessageReceived(this, e);
            }
        }

    The SendMessage method takes a PDSAMessageBrokerMessage object as an argument. It then creates an instance of the PDSAMessageBrokerEventArgs class, passing two items to the constructor: the MessageName from the PDSAMessageBrokerMessage object, and the object itself. It may seem a little redundant to pass in the message name when that same name is part of the message, but it does make consuming the event and checking for the message name a little cleaner – as you will see in the next section.

    Create a Global Message Broker

    In your WPF application, create an instance of this message broker class in the App class located in the App.xaml file. Create a public property in the App class and create a new instance of that class in the OnStartup event procedure, as shown in the following code:

        public partial class App : Application
        {
            public PDSAMessageBroker MessageBroker { get; set; }

            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                MessageBroker = new PDSAMessageBroker();
            }
        }

    Sending and Receiving Messages

    Let's assume you have a user control that you load into a control on your main window, and you want to send a message from that user control to the main window. You might have the main window display a message box, or put a string into a status bar, as shown in Figure 1.

    [Figure 1: The main window can receive and send messages]

    The first thing you do in the main window is hook up an event procedure to the MessageReceived event of the global message broker. This is done in the constructor of the main window:

        public MainWindow()
        {
            InitializeComponent();

            (Application.Current as App).MessageBroker.MessageReceived +=
                new MessageReceivedEventHandler(MessageBroker_MessageReceived);
        }

    One piece of code you might not be familiar with is accessing a property defined in the App class of your XAML application. Within the App.xaml file is a class named App that inherits from the Application object. You access the global instance of this App class by using Application.Current. You cast Application.Current to 'App' prior to accessing any of the public properties or methods you defined in the App class. Thus the code (Application.Current as App).MessageBroker allows you to get at the MessageBroker property defined in the App class.

    In the MessageReceived event procedure in the main window (shown below) you can now check whether the MessageName property of the PDSAMessageBrokerEventArgs is equal to "StatusBar", and if it is, display the message body in the status bar text block control.

        void MessageBroker_MessageReceived(object sender,
            PDSAMessageBrokerEventArgs e)
        {
            switch (e.MessageName)
            {
                case "StatusBar":
                    tbStatus.Text = e.Message.MessageBody.ToString();
                    break;
            }
        }

    In the Page 1 user control's Loaded event procedure you send the "StatusBar" message through the global message broker to any listener using the following code:

        private void UserControl_Loaded(object sender,
            RoutedEventArgs e)
        {
            // Send Status Message
            (Application.Current as App).MessageBroker.
                SendMessage(new PDSAMessageBrokerMessage("StatusBar",
                    "This is Page 1"));
        }

    Since the main window is listening for the message "StatusBar", it will display the value "This is Page 1" in the status bar at the bottom of the main window.

    Sending a Message to a User Control

    The previous example sent a message from the user control to the main window. You can also send messages from the main window to any listener. Remember that the global message broker is really just a broadcaster to anyone who has hooked into the MessageReceived event. In the constructor of the user control named ucPage1 you can hook into the global message broker's MessageReceived event. You can then listen for any messages sent to this control by using a switch-case structure similar to the one in the main window.

        public ucPage1()
        {
            InitializeComponent();

            // Hook to the Global Message Broker
            (Application.Current as App).MessageBroker.MessageReceived +=
                new MessageReceivedEventHandler(MessageBroker_MessageReceived);
        }

        void MessageBroker_MessageReceived(object sender,
            PDSAMessageBrokerEventArgs e)
        {
            // Look for messages intended for Page 1
            switch (e.MessageName)
            {
                case "ForPage1":
                    MessageBox.Show(e.Message.MessageBody.ToString());
                    break;
            }
        }

    Once the ucPage1 user control has been loaded into the main window, you can then send a message using the following code:

        private void btnSendToPage1_Click(object sender,
            RoutedEventArgs e)
        {
            PDSAMessageBrokerMessage arg =
                new PDSAMessageBrokerMessage();

            arg.MessageName = "ForPage1";
            arg.MessageBody = "Message For Page 1";

            // Send a message to Page 1
            (Application.Current as App).MessageBroker.SendMessage(arg);
        }

    Since the MessageName matches what is in the ucPage1 MessageReceived event procedure, ucPage1 can do anything in response to that event. It is important to note that when the message gets sent, it is sent to all MessageReceived event procedures, not just the one looking for a message called "ForPage1". If the user control ucPage1 is not loaded when this message is broadcast and no other code is listening for it, the message is simply ignored.

    Remove Event Handler

    In each class where you add an event handler to the MessageReceived event, you need to make sure to remove that event handler when you are done. Failure to do so can create a strong reference to the class and thus prevent that object from being garbage collected. In each of your user controls, make sure to remove the event handler in the Unloaded event:

        private void UserControl_Unloaded(object sender, RoutedEventArgs e)
        {
            if (_MessageBroker != null)
                _MessageBroker.MessageReceived -=
                    _MessageBroker_MessageReceived;
        }

    Problems with Message Brokering

    As with most "global" classes, or classes that hook up events to other classes, garbage collection is something you need to consider. Just the simple act of hooking up an event procedure to a global event handler creates a reference between your user control and the message broker in the App class. This means that even when your user control is removed from your UI, the class will still be in memory because of the reference from the message broker. This can cause messages to still be handled even though the UI is not being displayed. It is up to you to make sure you remove those event handlers as discussed in the previous section. If you don't, then the garbage collector cannot release those objects.

    Instead of using events to send messages from one object to another, you might consider registering your objects with a central message broker. This message broker then becomes a collection class into which you pass an object along with the messages that object wishes to receive. You do end up with the same problem, however: you have to unregister your objects, otherwise they stay in memory. To alleviate this problem you can look into using the WeakReference class as a way to store your objects so they can be garbage collected if need be. Discussing weak references is beyond the scope of this post, but you can look it up on the web.

    Summary

    In this blog post you learned how to create a simple message broker system that allows you to send messages from one object to another without having to reference objects directly. This reduces the coupling between objects in your application. You do need to remember to get rid of any event handlers before your objects go out of scope, or you run the risk of memory leaks and of events being called even though you can no longer access the object that is responding to them.

    NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "A Communication System for XAML Applications" from the drop-down list.
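    As a follow-on to the WeakReference suggestion above, here is a minimal sketch (my own illustration, not code from this article) of a registration-based broker that holds its subscribers through WeakReference, so an unloaded view can still be collected even if it never unregisters:

        using System;
        using System.Collections.Generic;

        public interface IMessageSubscriber
        {
            void Receive(PDSAMessageBrokerMessage msg);
        }

        public class WeakMessageBroker
        {
            private readonly List<WeakReference> _subscribers =
                new List<WeakReference>();

            public void Register(IMessageSubscriber subscriber)
            {
                _subscribers.Add(new WeakReference(subscriber));
            }

            public void SendMessage(PDSAMessageBrokerMessage msg)
            {
                // Walk backwards so dead entries can be pruned while iterating.
                for (int i = _subscribers.Count - 1; i >= 0; i--)
                {
                    var target = _subscribers[i].Target as IMessageSubscriber;
                    if (target == null)
                        _subscribers.RemoveAt(i);   // subscriber was collected
                    else
                        target.Receive(msg);
                }
            }
        }

    Because nothing holds a strong reference to a subscriber, forgetting to unregister no longer pins a control in memory; dead entries are simply pruned on the next broadcast.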

    Read the article

  • Nginx 1.6.1: Installing from source not running correctly

    - by Maca
    I just installed Nginx 1.6.1 from source, but it doesn't seem to have installed correctly. Nginx is running if I run service nginx status, but when I do nginx -v it outputs "command not found". Regular HTML pages are served fine and there is nothing in the error logs. I am on the AWS EC2 Linux AMI. Here is my /etc/init.d/nginx script:

        #!/bin/sh
        #
        # nginx - this script starts and stops the nginx daemon
        #
        # chkconfig:   - 85 15
        # description: Nginx is an HTTP(S) server, HTTP(S) reverse \
        #              proxy and IMAP/POP3 proxy server
        # processname: nginx
        # config:      /etc/nginx/nginx.conf
        # config:      /etc/sysconfig/nginx
        # pidfile:     /usr/local/nginx/logs/nginx.pid

        # Source function library.
        . /etc/rc.d/init.d/functions

        # Source networking configuration.
        . /etc/sysconfig/network

        # Check that networking is up.
        [ "$NETWORKING" = "no" ] && exit 0

        nginx="/usr/local/nginx/sbin/nginx"
        prog=$(basename $nginx)

        NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

        [ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

        lockfile=/usr/local/nginx/logs/nginx.lock

        make_dirs() {
            # make required directories
            user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
            options=`$nginx -V 2>&1 | grep 'configure arguments:'`
            for opt in $options; do
                if [ `echo $opt | grep '.*-temp-path'` ]; then
                    value=`echo $opt | cut -d "=" -f 2`
                    if [ ! -d "$value" ]; then
                        # echo "creating" $value
                        mkdir -p $value && chown -R $user $value
                    fi
                fi
            done
        }

        start() {
            [ -x $nginx ] || exit 5
            [ -f $NGINX_CONF_FILE ] || exit 6
            make_dirs
            echo -n $"Starting $prog: "
            daemon $nginx -c $NGINX_CONF_FILE
            retval=$?
            echo
            [ $retval -eq 0 ] && touch $lockfile
            return $retval
        }

        stop() {
            echo -n $"Stopping $prog: "
            killproc $prog -QUIT
            retval=$?
            echo
            [ $retval -eq 0 ] && rm -f $lockfile
            return $retval
        }

        restart() {
            configtest || return $?
            stop
            sleep 1
            start
        }

        reload() {
            configtest || return $?
            echo -n $"Reloading $prog: "
            killproc $nginx -HUP
            RETVAL=$?
            echo
        }

        force_reload() {
            restart
        }

        configtest() {
            $nginx -t -c $NGINX_CONF_FILE
        }

        rh_status() {
            status $prog
        }

        rh_status_q() {
            rh_status >/dev/null 2>&1
        }

        case "$1" in
            start)
                rh_status_q && exit 0
                $1
                ;;
            stop)
                rh_status_q || exit 0
                $1
                ;;
            restart|configtest)
                $1
                ;;
            reload)
                rh_status_q || exit 7
                $1
                ;;
            force-reload)
                force_reload
                ;;
            status)
                rh_status
                ;;
            condrestart|try-restart)
                rh_status_q || exit 0
                ;;
            *)
                echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
                exit 2
        esac
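    The usual explanation for this symptom (a likely cause, not a confirmed diagnosis): a source build installs the binary to /usr/local/nginx/sbin, which is not on the default PATH, so the init script (which uses the full path) works while a bare nginx does not. Note that make_dirs in the script above also calls bare nginx -V on its first line, so it trips over the same thing; using $nginx there would be more consistent. A sketch of the fix:

        # assuming the default source-install prefix of /usr/local/nginx
        ln -s /usr/local/nginx/sbin/nginx /usr/sbin/nginx

        # or put the sbin directory on PATH for all login shells
        echo 'export PATH=$PATH:/usr/local/nginx/sbin' > /etc/profile.d/nginx.sh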

    Read the article

  • Gmail rejects emails. Openspf.net fails the tests

    - by pablomedok
    I've got a problem with Gmail. It started after one of our trojan-infected PCs sent spam for one day from our IP address. We've fixed the problem, but we ended up on 3 blacklists. We've fixed that, too. But still, every time we send an email to Gmail the message is rejected. So I checked Google's Bulk Senders guide once again, found an error in our SPF record, and fixed it. Google says everything should become fine after some time, but that doesn't happen: 3 weeks have already passed and we still can't send emails to Gmail.

    Our MX setup is a bit complex, but not too much. We have the domain name delo-company.com with its own mail @delo-company.com (that one is fine; the problems are with the subdomain corp.delo-company.com). The delo-company.com zone has several DNS records for the subdomain:

        corp                   A    82.209.198.147
        corp                   MX   20 corp.delo-company.com
        corp.delo-company.com  TXT  "v=spf1 ip4:82.209.198.147 ~all"

    (I set ~all for testing purposes only; it was -all before that.) These records are for our corporate Exchange 2003 server at 82.209.198.147. Its LAN name is s2.corp.delo-company.com, so its HELO/EHLO greeting is also s2.corp.delo-company.com. To pass the EHLO check we've also created some records in delo-company.com's zone:

        s2.corp                   A    82.209.198.147
        s2.corp.delo-company.com  TXT  "v=spf1 ip4:82.209.198.147 ~all"

    As I understand it, SPF verification should proceed this way:

    1. Our server s2 connects to the recipient's MX (Rcp.MX): EHLO s2.corp.delo-company.com
    2. Rcp.MX says OK and runs an SPF check on the HELO/EHLO name. It looks up s2.corp.delo-company.com and gets the above DNS records. The TXT record says that s2.corp.delo-company.com should come only from IP 82.209.198.147, so this check should pass.
    3. Then our s2 server says MAIL FROM:, and the Rcp.MX server checks that, too. The values are the same, so the result should also be positive.

    Maybe there is also an rDNS check, but I'm not sure whether HELO or MAIL FROM is checked against it. Our PTR record for 82.209.198.147 is:

        147.198.209.82.in-addr.arpa. 86400 IN PTR s2.corp.delo-company.com.

    To me everything looks fine, but all emails to Gmail are still rejected. So I checked MXtoolbox.com (it says everything is fine), I passed the http://www.kitterman.com/spf/validate.html Python check, and I did the port25.com email test. It's fine, too:

        Return-Path: <[email protected]>
        Received: from s2.corp.delo-company.com (82.209.198.147) by verifier.port25.com id ha45na11u9cs for <[email protected]>; Fri, 2 Mar 2012 13:03:21 -0500 (envelope-from <[email protected]>)
        Authentication-Results: verifier.port25.com; spf=pass [email protected]
        Authentication-Results: verifier.port25.com; domainkeys=neutral (message not signed) [email protected]
        Authentication-Results: verifier.port25.com; dkim=neutral (message not signed)
        Authentication-Results: verifier.port25.com; sender-id=pass [email protected]
        Content-class: urn:content-classes:message
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="----_=_NextPart_001_01CCF89E.BE02A069"
        Subject: test
        Date: Fri, 2 Mar 2012 21:03:15 +0300
        X-MimeOLE: Produced By Microsoft Exchange V6.5
        Message-ID: <[email protected]>
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator:
        Thread-Topic: test
        Thread-Index: Acz4jS34oznvbyFQR4S5rXsNQFvTdg==
        From: =?koi8-r?B?89XQ0tXOwMsg8MHXxcw=?= <[email protected]>
        To: <[email protected]>

    I also checked with [email protected], but it FAILs every time, no matter which SPF records I publish:

        <s2.corp.delo-company.com #5.7.1 smtp;550 5.7.1 <[email protected]>: Recipient address rejected: SPF Tests: Mail-From Result="softfail": Mail From="[email protected]" HELO name="s2.corp.delo-company.com" HELO Result="softfail" Remote IP="82.209.198.147">

    I've filled in the Gmail form twice, but nothing happens. We do not send spam, only emails to our clients. Two or three times we did send mass emails (like New Year greetings and sales promos) from corp.delo-company.com addresses, but they all complied with Gmail's Bulk Senders guide (I mean SPF, open relays, Precedence: bulk and Unsubscribe tags), so that should not be a problem. Please help me. What am I doing wrong?

    UPD: I also tried the Unlocktheinbox.com test, and the server fails that test as well. Here is the result: http://bit.ly/wYr39h . Here is one more: http://bit.ly/ypWLjr

    I also tried to send email from that server manually via telnet, and everything is fine. Here is what I type:

        220 mx.google.com ESMTP g15si4811326anb.170
        HELO s2.corp.delo-company.com
        250 mx.google.com at your service
        MAIL FROM: <[email protected]>
        250 2.1.0 OK g15si4811326anb.170
        RCPT TO: <[email protected]>
        250 2.1.5 OK g15si4811326anb.170
        DATA
        354 Go ahead g15si4811326anb.170
        From: [email protected]
        To: Pavel <[email protected]>
        Subject: Test 28

        This is telnet test
        .
        250 2.0.0 OK 1330795021 g15si4811326anb.170
        QUIT
        221 2.0.0 closing connection g15si4811326anb.170

    And this is what I get:

        Delivered-To: [email protected]
        Received: by 10.227.132.73 with SMTP id a9csp96864wbt; Sat, 3 Mar 2012 09:17:02 -0800 (PST)
        Received: by 10.101.128.12 with SMTP id f12mr4837125ann.49.1330795021572; Sat, 03 Mar 2012 09:17:01 -0800 (PST)
        Return-Path: <[email protected]>
        Received: from s2.corp.delo-company.com (s2.corp.delo-company.com. [82.209.198.147]) by mx.google.com with SMTP id g15si4811326anb.170.2012.03.03.09.15.59; Sat, 03 Mar 2012 09:17:00 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates 82.209.198.147 as permitted sender) client-ip=82.209.198.147;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 82.209.198.147 as permitted sender) [email protected]
        Date: Sat, 03 Mar 2012 09:17:00 -0800 (PST)
        Message-Id: <[email protected]>
        From: [email protected]
        To: Pavel <[email protected]>
        Subject: Test 28

        This is telnet test
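    One detail in the bouncedetails output is actionable (a suggestion, not a verified fix): both the Mail-From and HELO results come back "softfail", and a softfail can only be produced by the ~all qualifier, meaning the verifier fell through the ip4 mechanism – so it may have been testing against a cached or older record. Checking what public resolvers actually serve, and restoring the strict -all once testing is done, are cheap first steps:

        # check what public resolvers currently serve for both names
        dig +short TXT corp.delo-company.com
        dig +short TXT s2.corp.delo-company.com

        # once testing is done, the strict form the records had before:
        #   corp.delo-company.com.    TXT "v=spf1 ip4:82.209.198.147 -all"
        #   s2.corp.delo-company.com. TXT "v=spf1 ip4:82.209.198.147 -all"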

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup: an Intel® Core™ i7-2600 with 16 GB DDR3 RAM, running varnish + nginx + php-fpm + APC for a not very heavy WordPress blog with W3 Total Cache and a CDN.

    My problem is that after 55 hits per second (according to blitz.io), Varnish starts giving out timeouts. CPU usage at that point is hardly 1%, and free memory stays above 10 GB at all times. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts, though after that the CPU usage goes to 100% and it stops responding. Can you help me optimize this to handle more? As I understand it, nginx has nothing to do here, so I don't include its config.

    php-fpm config:

        listen = /tmp/php5-fpm.sock
        listen.allowed_clients = 127.0.0.1
        user = nginx
        group = nginx
        pm = dynamic
        pm.max_children = 150
        pm.start_servers = 7
        pm.min_spare_servers = 2
        pm.max_spare_servers = 15
        pm.max_requests = 500
        slowlog = /var/log/php-fpm/www-slow.log
        php_admin_value[error_log] = /var/log/php-fpm/www-error.log
        php_admin_flag[log_errors] = on

    APC:

        extension = apc.so
        apc.enabled = 1
        apc.shm_size = 512MB
        apc.num_files_hint = 0
        apc.user_entries_hint = 0
        apc.ttl = 7200
        apc.use_request_time = 1
        apc.user_ttl = 7200
        apc.gc_ttl = 3600
        apc.cache_by_default = 1
        apc.filters
        apc.mmap_file_mask = /tmp/apc.XXXXXX
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 1M
        apc.stat = 1
        apc.stat_ctime = 0
        apc.canonicalize = 0
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.rfc1867 = 0
        apc.rfc1867_prefix = upload_
        apc.rfc1867_name = APC_UPLOAD_PROGRESS
        apc.rfc1867_freq = 0
        apc.rfc1867_ttl = 3600
        apc.include_once_override = 0
        apc.lazy_classes = 0
        apc.lazy_functions = 0
        apc.coredump_unmap = 0
        apc.file_md5 = 0
        apc.preload_path

    Varnish VCL:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
            .connect_timeout = 6s;
            .first_byte_timeout = 6s;
            .between_bytes_timeout = 60s;
        }

        acl purgehosts {
            "localhost";
            "127.0.0.1";
        }

        # Called after a document has been successfully retrieved from the backend.
        sub vcl_fetch {
            # Uncomment to make the default cache "time to live" 5 minutes – handy,
            # but it may cache stale pages unless purged. (TODO)
            # By default Varnish will use the headers sent to it by Apache (the backend
            # server) to figure out the correct TTL.
            # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess
            set beresp.ttl = 24h;

            # Strip cookies for static files and set a long cache expiry time.
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset beresp.http.set-cookie;
                set beresp.ttl = 24h;
            }

            # If WordPress cookies are found, the page is not cacheable
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set beresp.cacheable = false;  # versions less than 3
                # beresp.ttl > 0 is cacheable, so 0 will not be cached
                set beresp.ttl = 0s;
            } else {
                # set beresp.cacheable = true;
                set beresp.ttl = 24h;  # cache for 24 hrs
            }

            # Varnish determined the object was not cacheable
            # (if ttl is not > 0 seconds then it is not cacheable)
            if (!beresp.ttl > 0s) {
                # set beresp.http.X-Cacheable = "NO:Not Cacheable";
            } else if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # You don't wish to cache content for logged-in users
                set beresp.http.X-Cacheable = "NO:Got Session";
                return (hit_for_pass);  # previously just pass, changed in v3+
            } else if (beresp.http.Cache-Control ~ "private") {
                # You are respecting the Cache-Control=private header from the backend
                set beresp.http.X-Cacheable = "NO:Cache-Control=private";
                return (hit_for_pass);
            } else if (beresp.ttl < 1s) {
                # You are extending the lifetime of the object artificially
                set beresp.ttl = 300s;
                set beresp.grace = 300s;
                set beresp.http.X-Cacheable = "YES:Forced";
            } else {
                # Varnish determined the object was cacheable
                set beresp.http.X-Cacheable = "YES";
                if (beresp.status == 404 || beresp.status >= 500) {
                    set beresp.ttl = 0s;
                }
                # Deliver the content
                return (deliver);
            }
        }

        sub vcl_hash {
            # Each cached page has to be identified by a key that unlocks it.
            # Add the browser cookie only if a WordPress cookie is found.
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set req.hash += req.http.Cookie;
                hash_data(req.http.Cookie);
            }
        }

        # vcl_recv is called whenever a request is received
        sub vcl_recv {
            # Remove ?ver=xxxxx strings from urls so css and js files are cached.
            # Watch out when upgrading WordPress; you need to restart Varnish or flush the cache.
            set req.url = regsub(req.url, "\?ver=.*$", "");

            # Remove "replytocom" from requests to make caching better.
            set req.url = regsub(req.url, "\?replytocom=.*$", "");

            remove req.http.X-Forwarded-For;
            set req.http.X-Forwarded-For = client.ip;

            # Exclude this site because it breaks if cached
            if (req.http.host == "sr.ituts.gr") {
                return (pass);
            }

            # Serve objects up to 2 minutes past their expiry if the backend is slow to respond.
            set req.grace = 120s;

            # Strip cookies for static files:
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset req.http.Cookie;
                return (lookup);
            }

            # Remove has_js and Google Analytics __* cookies.
            set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
            # Remove a ";" prefix, if present.
            set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
            # Remove empty cookies.
            if (req.http.Cookie ~ "^\s*$") {
                unset req.http.Cookie;
            }

            if (req.request == "PURGE") {
                if (!client.ip ~ purgehosts) {
                    error 405 "Not allowed.";
                }
                # previous version: ban() was purge()
                ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host);
                error 200 "Purged.";
            }

            # Pass anything other than GET and HEAD directly.
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            /* We only deal with GET and HEAD by default */

            # Remove the comments cookie to make caching better.
            set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", "");

            # Never cache the admin pages, the server-status page, or the feed.
            # (You may want to cache your feed; I don't.)
            if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) {
                return (pipe);
            }

            # Don't cache authenticated sessions
            if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") {
                return (lookup);
            }

            # Don't cache ajax requests
            if (req.http.X-Requested-With == "XMLHttpRequest" ||
                req.url ~ "nocache" ||
                req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") {
                return (pass);
            }

            return (lookup);
        }

    Varnish daemon options:

        DAEMON_OPTS="-a :80 \
            -T 127.0.0.1:6082 \
            -f /etc/varnish/ituts.vcl \
            -u varnish -g varnish \
            -S /etc/varnish/secret \
            -p thread_pool_add_delay=2 \
            -p thread_pools=8 \
            -p thread_pool_min=100 \
            -p thread_pool_max=1000 \
            -p session_linger=50 \
            -p session_max=150000 \
            -p sess_workspace=262144 \
            -s malloc,5G"

    I'm not sure where to start. Should I first optimize php-fpm and then move on to Varnish, or is php-fpm at its max right now, meaning I should start looking for the problem in Varnish?
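    Given that the CPU is idle and memory plentiful at the moment of the timeouts, the bottleneck is often connection queueing rather than Varnish itself. A sketch of things worth checking (assumptions to test, not a tuned recipe):

        # kernel accept-queue limits – the defaults can cap request bursts
        sysctl -w net.core.somaxconn=4096
        sysctl -w net.ipv4.tcp_max_syn_backlog=8192

        # file-descriptor limit for the varnish user (each open session costs one)
        ulimit -n 131072

        # php-fpm's socket has its own queue; listen.backlog in the pool
        # config plays the same role on that side

        # watch for dropped sessions and thread shortages while blitz.io runs
        varnishstat -1 | egrep 'n_wrk|client_drop'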

    Read the article

  • ASP.NET MVC 3 Hosting :: New Features in ASP.NET MVC 3

    - by mbridge
    Razor View Engine The Razor view engine is a new view engine option for ASP.NET MVC that supports the Razor templating syntax. The Razor syntax is a streamlined approach to HTML templating designed with the goal of being a code driven minimalist templating approach that builds on existing C#, VB.NET and HTML knowledge. The result of this approach is that Razor views are very lean and do not contain unnecessary constructs that get in the way of you and your code. ASP.NET MVC 3 Preview 1 only supports C# Razor views which use the .cshtml file extension. VB.NET support will be enabled in later releases of ASP.NET MVC 3. For more information and examples, see Introducing “Razor” – a new view engine for ASP.NET on Scott Guthrie’s blog. Dynamic View and ViewModel Properties A new dynamic View property is available in views, which provides access to the ViewData object using a simpler syntax. For example, imagine two items are added to the ViewData dictionary in the Index controller action using code like the following: public ActionResult Index() {          ViewData["Title"] = "The Title";          ViewData["Message"] = "Hello World!"; } Those properties can be accessed in the Index view using code like this: <h2>View.Title</h2> <p>View.Message</p> There is also a new dynamic ViewModel property in the Controller class that lets you add items to the ViewData dictionary using a simpler syntax. Using the previous controller example, the two values added to the ViewData dictionary can be rewritten using the following code: public ActionResult Index() {     ViewModel.Title = "The Title";     ViewModel.Message = "Hello World!"; } “Add View” Dialog Box Supports Multiple View Engines The Add View dialog box in Visual Studio includes extensibility hooks that allow it to support multiple view engines, as shown in the following figure: Service Location and Dependency Injection Support ASP.NET MVC 3 introduces improved support for applying Dependency Injection (DI) via Inversion of Control (IoC) containers. ASP.NET MVC 3 Preview 1 provides the following hooks for locating services and injecting dependencies: - Creating controller factories. - Creating controllers and setting dependencies. - Setting dependencies on view pages for both the Web Form view engine and the Razor view engine (for types that derive from ViewPage, ViewUserControl, ViewMasterPage, WebViewPage). - Setting dependencies on action filters. Using a Dependency Injection container is not required in order for ASP.NET MVC 3 to function properly. Global Filters ASP.NET MVC 3 allows you to register filters that apply globally to all controller action methods. Adding a filter to the global filters collection ensures that the filter runs for all controller requests. To register an action filter globally, you can make the following call in the Application_Start method in the Global.asax file: GlobalFilters.Filters.Add(new MyActionFilter()); The source of global action filters is abstracted by the new IFilterProvider interface, which can be registered manually or by using Dependency Injection. This allows you to provide your own source of action filters and choose at run time whether to apply a filter to an action in a particular request. New JsonValueProviderFactory Class The new JsonValueProviderFactory class allows action methods to receive JSON-encoded data and model-bind it to an action-method parameter. This is useful in scenarios such as client templating. 
Client templates enable you to format and display a single data item or set of data items by using a fragment of HTML. ASP.NET MVC 3 lets you connect client templates easily with an action method that both returns and receives JSON data. Support for .NET Framework 4 Validation Attributes and IvalidatableObject The ValidationAttribute class was improved in the .NET Framework 4 to enable richer support for validation. When you write a custom validation attribute, you can use a new IsValid overload that provides a ValidationContext instance. This instance provides information about the current validation context, such as what object is being validated. This change enables scenarios such as validating the current value based on another property of the model. The following example shows a sample custom attribute that ensures that the value of PropertyOne is always larger than the value of PropertyTwo: public class CompareValidationAttribute : ValidationAttribute {     protected override ValidationResult IsValid(object value,              ValidationContext validationContext) {         var model = validationContext.ObjectInstance as SomeModel;         if (model.PropertyOne > model.PropertyTwo) {            return ValidationResult.Success;         }         return new ValidationResult("PropertyOne must be larger than PropertyTwo");     } } Validation in ASP.NET MVC also supports the .NET Framework 4 IValidatableObject interface. This interface allows your model to perform model-level validation, as in the following example: public class SomeModel : IValidatableObject {     public int PropertyOne { get; set; }     public int PropertyTwo { get; set; }     public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {         if (PropertyOne <= PropertyTwo) {            yield return new ValidationResult(                "PropertyOne must be larger than PropertyTwo");         }     } } New IClientValidatable Interface The new IClientValidatable interface allows the validation framework to discover at run time whether a validator has support for client validation. This interface is designed to be independent of the underlying implementation; therefore, where you implement the interface depends on the validation framework in use. For example, for the default data annotations-based validator, the interface would be applied on the validation attribute. Support for .NET Framework 4 Metadata Attributes ASP.NET MVC 3 now supports .NET Framework 4 metadata attributes such as DisplayAttribute. New IMetadataAware Interface The new IMetadataAware interface allows you to write attributes that simplify how you can contribute to the ModelMetadata creation process. Before this interface was available, you needed to write a custom metadata provider in order to have an attribute provide extra metadata. This interface is consumed by the AssociatedMetadataProvider class, so support for the IMetadataAware interface is automatically inherited by all classes that derive from that class (notably, the DataAnnotationsModelMetadataProvider class). New Action Result Types In ASP.NET MVC 3, the Controller class includes two new action result types and corresponding helper methods. HttpNotFoundResult Action The new HttpNotFoundResult action result is used to indicate that a resource requested by the current URL was not found. The status code is 404. This class derives from HttpStatusCodeResult. 
The Controller class includes an HttpNotFound method that returns an instance of this action result type, as shown in the following example: public ActionResult List(int id) { if (id < 0) { return HttpNotFound(); } return View(); } HttpStatusCodeResult Action The new HttpStatusCodeResult action result is used to set the response status code and description. Permanent Redirect The HttpRedirectResult class has a new Boolean Permanent property that is used to indicate whether a permanent redirect should occur. A permanent redirect uses the HTTP 301 status code. Corresponding to this change, the Controller class now has several methods for performing permanent redirects: - RedirectPermanent - RedirectToRoutePermanent - RedirectToActionPermanent These methods return an instance of HttpRedirectResult with the Permanent property set to true (a short usage sketch appears at the end of this section). Breaking Changes The order of execution for exception filters has changed for exception filters that have the same Order value. In ASP.NET MVC 2 and earlier, exception filters on the controller with the same Order as those on an action method were executed before the exception filters on the action method. This would typically be the case when exception filters were applied without an explicitly specified Order value. In MVC 3, this order has been reversed so that the most specific exception handler executes first. As in earlier versions, if the Order property is explicitly specified, the filters are run in the specified order. Known Issues When you are editing a Razor view (CSHTML file), the Go To Controller menu item in Visual Studio will not be available, and there are no code snippets.
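Returning to the permanent-redirect helpers listed above, here is a minimal sketch; the OldDetails action and the route values are hypothetical names invented for this illustration:
public ActionResult OldDetails(int id) {
    // Issues a permanent (HTTP 301) redirect so clients and search engines
    // can update their stored URLs.
    return RedirectToActionPermanent("Details", "Products", new { id = id });
}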

    Read the article

  • Integrating HTML into Silverlight Applications

    - by dwahlin
    Looking for a way to display HTML content within a Silverlight application? If you haven’t tried doing that before it can be challenging at first until you know a few tricks of the trade.  Being able to display HTML is especially handy when you’re required to display RSS feeds (with embedded HTML), SQL Server Reporting Services reports, PDF files (not actually HTML – but the techniques discussed will work), or other HTML content.  In this post I'll discuss three options for displaying HTML content in Silverlight applications and describe how my company is using these techniques in client applications. Displaying HTML Overlays If you need to display HTML over a Silverlight application (such as an RSS feed containing HTML data in it) you’ll need to set the Silverlight control’s windowless parameter to true. This can be done using the object tag as shown next: <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/HTMLAndSilverlight.xap"/> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50401.0" /> <param name="autoUpgrade" value="true" /> <param name="windowless" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration:none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/> </a> </object> By setting the control to “windowless” you can overlay HTML objects by using absolute positioning and other CSS techniques. Keep in mind that on Windows machines the windowless setting can result in a performance hit when complex animations or HD video are running since the plug-in content is displayed directly by the browser window. It goes without saying that you should only set windowless to true when you really need the functionality it offers. For example, if I want to display my blog’s RSS content on top of a Silverlight application I could set windowless to true and create a user control that grabbed the content and output it using a DataList control: <style type="text/css"> a {text-decoration:none;font-weight:bold;font-size:14pt;} </style> <div style="margin-top:10px; margin-left:10px;margin-right:5px;"> <asp:DataList ID="RSSDataList" runat="server" DataSourceID="RSSDataSource"> <ItemTemplate> <a href='<%# XPath("link") %>'><%# XPath("title") %></a> <br /> <%# XPath("description") %> <br /> </ItemTemplate> </asp:DataList> <asp:XmlDataSource ID="RSSDataSource" DataFile="http://weblogs.asp.net/dwahlin/rss.aspx" XPath="rss/channel/item" CacheDuration="60" runat="server" /> </div> The user control can then be placed in the page hosting the Silverlight control as shown below. This example adds a Close button, additional content to display in the overlay window and the HTML generated from the user control. 
<div id="RSSDiv"> <div style="background-color:#484848;border:1px solid black;height:35px;width:100%;"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideOverlay();" style="cursor:pointer;" /> </div> <div style="overflow:auto;width:800px;height:565px;"> <div style="float:left;width:100px;height:103px;margin-left:10px;margin-top:5px;"> <img src="http://weblogs.asp.net/blogs/dwahlin/dan2008.jpg" style="border:1px solid Gray" /> </div> <div style="float:left;width:300px;height:103px;margin-top:5px;"> <a href="http://weblogs.asp.net/dwahlin" style="margin-left:10px;font-size:20pt;">Dan Wahlin's Blog</a> </div> <br /><br /><br /> <div style="clear:both;margin-top:20px;"> <uc:BlogRoller ID="BlogRoller" runat="server" /> </div> </div> </div> Of course, we wouldn’t want the RSS HTML content to be shown until requested. Once it’s requested, the absolute position of where it should show above the Silverlight control can be set using standard CSS styles. The following ID selector named #RSSDiv handles hiding the overlay div shown above and determines where it will be displayed on the screen. #RSSDiv { background-color:White; position:absolute; top:100px; left:300px; width:800px; height:600px; border:1px solid black; display:none; } Now that the HTML content to display above the Silverlight control is set, how can we show it when a user clicks a HyperlinkButton or other control in the application? Fortunately, Silverlight provides an excellent HTML bridge that allows direct access to content hosted within a page. The following code shows two JavaScript functions that can be called from Silverlight to handle showing or hiding HTML overlay content. The two functions rely on jQuery (http://www.jQuery.com) to make it easy to select HTML objects and manipulate their properties: function ShowOverlay() { rssDiv.css('display', 'block'); } function HideOverlay() { rssDiv.css('display', 'none'); } Calling the ShowOverlay function is as simple as adding the following code into the Silverlight application within a button’s Click event handler: private void OverlayHyperlinkButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.Invoke("ShowOverlay"); } The result of setting the Silverlight control’s windowless parameter to true and showing the HTML overlay content is shown in the following screenshot: Thinking Outside the Box to Show HTML Content Setting the windowless parameter to true may not be a viable option for some Silverlight applications, or you may simply want to go about showing HTML content a different way. The next technique I’ll show takes advantage of simple HTML, CSS and JavaScript code to handle showing HTML content while a Silverlight application is running in the browser. Keep in mind that with Silverlight’s HTML bridge feature you can always pop up HTML content in a new browser window using code similar to the following: System.Windows.Browser.HtmlPage.Window.Navigate( new Uri("http://silverlight.net"), "_blank"); For this example I’ll demonstrate how to hide the Silverlight application while maximizing a container div containing the HTML content to show. This allows HTML content to take up the full screen area of the browser without having to set windowless to true, and when done right it can make the user feel like they never left the Silverlight application.
The following HTML shows several div elements that are used to display HTML within the same browser window as the Silverlight application: <div id="JobPlanDiv"> <div style="vertical-align:middle"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideJobPlanIFrame();" style="cursor:pointer;" /> </div> <div id="JobPlan_IFrame_Container" style="height:95%;width:100%;margin-top:37px;"></div> </div> The JobPlanDiv element acts as a container for two other divs that handle showing a close button and hosting an iframe that will be added dynamically at runtime. JobPlanDiv isn’t visible when the Silverlight application loads due to the following ID selector added into the page: #JobPlanDiv { position:absolute; background-color:#484848; overflow:hidden; left:0; top:0; height:100%; width:100%; display:none; } When the HTML content needs to be shown or hidden the JavaScript functions shown next can be used: var jobPlanIFrameID = 'JobPlan_IFrame'; var slHost = null; var jobPlanContainer = null; var jobPlanIFrameContainer = null; var rssDiv = null; $(document).ready(function () { slHost = $('#silverlightControlHost'); jobPlanContainer = $('#JobPlanDiv'); jobPlanIFrameContainer = $('#JobPlan_IFrame_Container'); rssDiv = $('#RSSDiv'); }); function ShowJobPlanIFrame(url) { jobPlanContainer.css('display', 'block'); $('<iframe id="' + jobPlanIFrameID + '" src="' + url + '" style="height:100%;width:100%;" />') .appendTo(jobPlanIFrameContainer); slHost.css('width', '0%'); } function HideJobPlanIFrame() { jobPlanContainer.css('display', 'none'); $('#' + jobPlanIFrameID).remove(); slHost.css('width', '100%'); } ShowJobPlanIFrame() handles showing the JobPlanDiv div and adding an iframe into it dynamically. Once JobPlanDiv is shown, the Silverlight control host has its width set to a value of 0% to allow the control to stay alive while making it invisible to the user. I found that this technique works better across multiple browsers as opposed to manipulating the Silverlight control host div’s display or visibility properties. Now that you’ve seen the code to handle showing and hiding the HTML content area, let’s switch focus to the Silverlight application. As a user clicks on a link such as “View Report” the ShowJobPlanIFrame() JavaScript function needs to be called. The following code handles that task: private void ReportHyperlinkButton_Click(object sender, RoutedEventArgs e) { ShowBrowser(_BaseUrl + "/Report.aspx"); } public void ShowBrowser(string url) { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } Any URL can be passed into the ShowBrowser() method which handles invoking the JavaScript function. This includes standard web pages or even PDF files. We’ve used this technique frequently with our SmartPrint control (http://www.smartwebcontrols.com) which converts Silverlight screens into PDF documents and displays them. Here’s an example of the content generated:   Silverlight 4’s WebBrowser Control Both techniques shown to this point work well when Silverlight is running in-browser but not so well when it’s running out-of-browser since there’s no host page that you can access using the HTML bridge. Fortunately, Silverlight 4 provides a WebBrowser control that can be used to perform the same functionality quite easily. We’re currently using it in client applications to display PDF documents, SSRS reports and standard HTML content. Using the WebBrowser control simplifies the application quite a bit since no JavaScript is required if the application only runs out-of-browser. 
Here’s a simple example of defining the WebBrowser control in XAML. I typically define it in MainPage.xaml when a Silverlight Navigation template is used to create the project so that I can re-use the functionality across multiple screens. <Grid x:Name="WebBrowserGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Visibility="Collapsed"> <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <Border Background="#484848" HorizontalAlignment="Stretch" Height="40"> <Image x:Name="WebBrowserImage" Width="100" Height="33" Cursor="Hand" HorizontalAlignment="Right" Source="/HTMLAndSilverlight;component/Assets/Images/Close.png" MouseLeftButtonDown="WebBrowserImage_MouseLeftButtonDown" /> </Border> <WebBrowser x:Name="JobPlanReportWebBrowser" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" /> </StackPanel> </Grid> Looking through the XAML you can see that a close image is defined along with the WebBrowser control. Because the URL that the WebBrowser should navigate to isn’t known at design time, no value is assigned to the control’s Source property. If the XAML shown above is left “as is” you’ll find that any HTML content assigned to the WebBrowser doesn’t display properly. This is due to no height or width being set on the control. To handle this issue, the following code is added into the XAML’s code-behind file to dynamically determine the height and width of the page and assign it to the WebBrowser. This is done by handling the SizeChanged event. void MainPage_SizeChanged(object sender, SizeChangedEventArgs e) { WebBrowserGrid.Height = JobPlanReportWebBrowser.Height = ActualHeight; WebBrowserGrid.Width = JobPlanReportWebBrowser.Width = ActualWidth; } When the user wants to view HTML content they click a button which executes the code shown next: public void ShowBrowser(string url) { if (Application.Current.IsRunningOutOfBrowser) { JobPlanReportWebBrowser.NavigateToString("<html><body><iframe src='" + url + "' style='width:100%;height:97%;' /></body></html>"); WebBrowserGrid.Visibility = Visibility.Visible; } else { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } } private void WebBrowserImage_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { WebBrowserGrid.Visibility = Visibility.Collapsed; } Looking through the code you’ll see that it checks to see if the Silverlight application is running out-of-browser and then either displays the WebBrowser control or runs the JavaScript function discussed earlier. Although the WebBrowser control’s Source property could be assigned the URI of the page to navigate to, by assigning HTML content using the NavigateToString() method and adding an iframe, content can be shown from any site including cross-domain sites. This is especially handy when you need to grab a page from a reporting site that’s in a different domain than the Silverlight application. Here’s an example of viewing a PDF file inside an out-of-browser application. The first image shows the application running out-of-browser before the user clicks a PDF HyperlinkButton. The second image shows the PDF being displayed. While there are certainly other techniques that can be used, the ones shown here have worked well for us in different applications and provide the ability to display HTML content in-browser or out-of-browser. Feel free to add a comment if you have another tip or trick you like to use when working with HTML content in Silverlight applications.
Download Code Sample   For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.

    Read the article

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection. I’ve demonstrated how this can be parallelized using the Task Parallel Library and imperative data parallelism via the Parallel class. While this provides a huge step forward in terms of power and capabilities, in many cases special care must still be given for relatively common scenarios. C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project. When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task. By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain. Version 4.0 of the .NET framework extends this concept into the parallel computation space by introducing Parallel LINQ. Before we delve into PLINQ, let’s begin with a short discussion of LINQ. LINQ, the extensions to the .NET Framework which implement language integrated query, set, and transform operations, is implemented in many flavors. For our purposes, we are interested in LINQ to Objects. When dealing with parallelizing a routine, we typically are dealing with in-memory data storage. More data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself. LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>. The language enhancements use these extension methods to create a very concise, readable alternative to the traditional foreach statement. For example, let’s revisit our minimum aggregation routine we wrote in Part 4: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } Here, we’re doing a very simple computation, but writing this in an imperative style. This can be loosely translated to English as: Create a very large number, and save it in min. Loop through each item in the collection. For every item: Perform some computation, and save the result. If the computation is less than min, set min to the computation. Although this is fairly easy to follow, it’s quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement, using LINQ: double min = collection.Min(item => item.PerformComputation()); Here, we’re after the same information. However, this is written using a declarative programming style.
When we see this code, we’d naturally translate this to English as: Save the Min value of collection, determined via calling item.PerformComputation() That’s it – instead of multiple logical steps, we have one single, declarative request. This makes the developer’s intentions very clear, and very easy to follow. The system is free to implement this using whatever method required. Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations. This is a perfect fit in many cases when you have a problem that can be decomposed by data. To show this, let’s again refer to our minimum aggregation routine from Part 4, but this time, let’s review our final, parallelized version: // Safe, and fast! double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach( collection, // First, we provide a local state initialization delegate. () => double.MaxValue, // Next, we supply the body, which takes the original item, loop state, // and local state, and returns a new local state (item, loopState, localState) => { double value = item.PerformComputation(); return System.Math.Min(localState, value); }, // Finally, we provide an Action<TLocal>, to "merge" results together localState => { // This requires locking, but it's only once per used thread lock(syncObject) min = System.Math.Min(min, localState); } ); Here, we’re doing the same computation as above, but fully parallelized. Describing this in English becomes quite a feat: Create a very large number, and save it in min. Create a temporary object we can use for locking. Call Parallel.ForEach, specifying three delegates. For the first delegate: Initialize a local variable to hold the local state to a very large number. For the second delegate: For each item in the collection, perform some computation, save the result. If the result is less than our local state, save the result in local state. For the final delegate: Take a lock on our temporary object to protect our min variable. Save the min of our min and local state variables. Although this solves our problem, and does it in a very efficient way, we’ve created a set of code that is quite a bit more difficult to understand and maintain. PLINQ provides us with a very nice alternative. In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel(). That’s all we need to learn in order to use PLINQ: one single method. We can write our minimum aggregation in PLINQ very simply: double min = collection.AsParallel().Min(item => item.PerformComputation()); By simply adding “.AsParallel()” to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel! This can be loosely translated into English easily, as well: Process the collection in parallel. Get the Minimum value, determined by calling PerformComputation on each item. Here, our intention is very clear and easy to understand. We just want to perform the same operation we did in serial, but run it “as parallel”. PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available. By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel. This is simple, safe, and incredibly useful.
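To tie the pieces together, here is a small, self-contained sketch of the PLINQ version; the Item class and its PerformComputation method are hypothetical stand-ins for whatever expensive, independent work your real collection performs:
using System;
using System.Linq;

class Item {
    private readonly int seed;
    public Item(int seed) { this.seed = seed; }
    // Stand-in for an expensive, independent computation
    public double PerformComputation() { return Math.Sqrt(seed) * 42.0; }
}

class Program {
    static void Main() {
        Item[] collection = Enumerable.Range(1, 1000000)
                                      .Select(i => new Item(i))
                                      .ToArray();
        // AsParallel() partitions the work across cores; Min() merges the partial results
        double min = collection.AsParallel().Min(item => item.PerformComputation());
        Console.WriteLine(min);
    }
}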

    Read the article

  • SQL SERVER – Difference Between DATETIME and DATETIME2

    - by pinaldave
    Yesterday I wrote a very quick blog post on SQL SERVER – Difference Between GETDATE and SYSDATETIME and I got a tremendous response for the same. I suggest you read that blog post before continuing this blog post today. I had asked people to honestly take part and share their views about the above two system functions. There are a few emails as well as a few comments on the blog post asking how I came to know the difference between the two. The answer is real world issues. I was called in for a performance tuning consultancy where I was asked a very strange question by one developer. Here is the situation he was facing. The system had a single table with two different datetime columns. One column was datelastmodified and the second column was datefirstmodified. One of the columns was DATETIME and the other was DATETIME2. The developer was populating both of them with SYSDATETIME. He was always thinking that the values inserted into the table would be the same. This table was only accessed by INSERT statements and no updates were done to it in the application. One fine day he ran DISTINCT on both of these columns and was in for a surprise. He had always thought that both of the columns would have the same data, but in fact they had very different data. He presented this scenario to me. I said this could not be possible, but when I looked at the resultset, I had to agree with him. Here is a simple script generated to demonstrate the problem he was facing. This is just a sample of the original table. DECLARE @Interval INT SET @Interval = 10000 CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2) WHILE (@Interval > 0) BEGIN INSERT #TimeTable (FirstDate, LastDate) VALUES (SYSDATETIME(), SYSDATETIME()) SET @Interval = @Interval - 1 END GO SELECT COUNT(DISTINCT FirstDate) D_GETDATE, COUNT(DISTINCT LastDate) D_SYSGETDATE FROM #TimeTable GO SELECT DISTINCT a.FirstDate, b.LastDate FROM #TimeTable a INNER JOIN #TimeTable b ON a.FirstDate = b.LastDate GO SELECT * FROM #TimeTable GO DROP TABLE #TimeTable GO Let us see the resultset. You can clearly see from the result that SYSDATETIME() does not populate the same value in both of the fields. In fact the value is either rounded down or rounded up in the field which is DATETIME. Even though we are populating the same value, the values end up totally different in the two columns, making the self join fail and DISTINCT display different values. The best policy is: if you are using DATETIME, use GETDATE(), and if you are using DATETIME2, use SYSDATETIME() to populate them with the current date and time to accurately address the precision. As DATETIME2 was introduced in SQL Server 2008, the above script will only work with SQL Server 2008 and later versions. I hope I have answered a few of the questions asked yesterday. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
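A footnote on why the values diverge at all: DATETIME stores its time portion in 1/300-second ticks, while DATETIME2 keeps up to 100-nanosecond precision, so every DATETIME value gets snapped to the nearest tick. The following C# sketch (an illustration of that rounding behavior, not code from the original post) simulates how a high-precision time collapses onto the DATETIME grid:
using System;

class DateTimePrecisionDemo {
    static void Main() {
        DateTime now = DateTime.Now; // high precision, comparable to DATETIME2
        double ms = now.TimeOfDay.TotalMilliseconds;
        // Simulate DATETIME's 1/300-second resolution by rounding to the nearest tick
        double dateTimeMs = Math.Round(ms * 300.0 / 1000.0) * 1000.0 / 300.0;
        Console.WriteLine("DATETIME2-like value: {0:F4} ms after midnight", ms);
        Console.WriteLine("DATETIME-like value:  {0:F4} ms after midnight", dateTimeMs);
    }
}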

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass from DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of: public class DynamicFoo : DynamicObject { Dictionary<string, object> properties = new Dictionary<string, object>(); public string Bar { get; set; } public DateTime Entered { get; set; } public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; if (!properties.ContainsKey(binder.Name)) return false; result = properties[binder.Name]; return true; } public override bool TrySetMember(SetMemberBinder binder, object value) { properties[binder.Name] = value; return true; } } This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type. Strong Typing and Dynamic Casting I now can instantiate and use DynamicFoo in a couple of different ways: Strong Typing DynamicFoo fooExplicit = new DynamicFoo(); var fooVar = new DynamicFoo(); These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, and so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above as it's never used. Using either of these I can access the native properties: // static typing assignments fooVar.Bar = "Barred!"; fooExplicit.Entered = DateTime.Now; // echo back static values Console.WriteLine(fooVar.Bar); Console.WriteLine(fooExplicit.Entered); but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic. Dynamic dynamic fooDynamic = new DynamicFoo(); fooDynamic on the other hand is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this: DynamicFoo fooExplicit = new DynamicFoo(); dynamic fooDynamic = fooExplicit; Note that dynamic typically doesn't require an explicit cast, as the compiler automatically performs the cast, so there's no need to use as dynamic. Dynamic functionality works at runtime and allows for the dynamic wrapper to look up and call members dynamically.
A dynamic type will look for members to access or call in two places: Using the strongly typed members of the object Using the IDynamicMetaObjectProvider interface methods to access members So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows runtime determination of what action to take when a member is accessed at runtime *if* the member you are accessing does not exist on the object. Class members are checked first before the IDynamicMetaObjectProvider interface methods kick in (see the short sketch at the end of this post). All of the following works with the dynamic type: dynamic fooDynamic = new DynamicFoo(); // dynamic typing assignments fooDynamic.NewProperty = "Something new!"; fooDynamic.LastAccess = DateTime.Now; // dynamic assigning static properties fooDynamic.Bar = "dynamic barred"; fooDynamic.Entered = DateTime.Now; // echo back dynamic values Console.WriteLine(fooDynamic.NewProperty); Console.WriteLine(fooDynamic.LastAccess); Console.WriteLine(fooDynamic.Bar); Console.WriteLine(fooDynamic.Entered); The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see, it's pretty easy to create an extensible type this way that can add members at runtime. The Alter Ego of IDynamicObject The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong typing, compile-time features. You can see this easily in the Visual Studio code editor. As soon as you assign a value to a dynamic you lose IntelliSense, which means there's no IntelliSense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior which accesses native members, plus it uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type: dynamic fooDynamic = new DynamicFoo(); fooDynamic.NewProperty = "New Property Value"; DynamicFoo foo = fooDynamic; foo.Bar = "Barred"; Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to specify as dynamic or as DynamicFoo.
You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance to access the well-known strongly typed properties and to dynamic for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores to an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET
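Here is the sketch referenced earlier, illustrating the member lookup order using the DynamicFoo class from the beginning of the post (the example itself is illustrative, not from the original text): members that exist on the class are bound directly, and only missing members are routed through TryGetMember/TrySetMember into the dictionary:
dynamic foo = new DynamicFoo();
foo.Bar = "native";       // Bar is a real property, so TrySetMember is never called
foo.Extra = "dictionary"; // no Extra member exists, so TrySetMember stores it
Console.WriteLine(foo.Bar);   // read through the strongly typed property
Console.WriteLine(foo.Extra); // read through TryGetMember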

    Read the article

  • Convert a DVD Movie Directly to AVI with FairUse Wizard 2.9

    - by DigitalGeekery
    Are you looking for a way to back up your DVD movie collection to AVI?  Today we’ll show you how to rip a DVD movie directly to AVI with FairUse Wizard. About FairUse Wizard FairUse Wizard 2.9 uses the DivX, Xvid, or h.264 codec to convert DVD to an AVI file. It comes in both a free version and a commercial version. The free, or “Light” version, can create files up to 700MB while the commercial version can output a 1400MB file. This will allow you to back up your movies to CD, or even multiple movies on a single DVD. FairUse Wizard states that it does not work on copy protected discs, but we’ve seen it work on all but some of the most recent copy protection. For this tutorial we’re using the free Light Edition to convert a DVD to AVI. The commercial version, available for $29.99, offers even more encoding possibilities for converting video to your portable digital devices. Installation and Configuration Download and install FairUse Wizard. (Download link below). Once the install is complete, open FairUse Wizard by going to Start > All Programs > FairUse Wizard 2 > FairUse Wizard 2. FairUse Wizard will open on the new project screen. Select “Create a new project” and type a project name into the text box. This will be used as the file output name. Ex: A project name of Simpsons Movie will give you an output file of Simpsons Movie.avi. Next, browse for a destination folder for the output file and temp files. Note that you will need a minimum of 6 GB of free disk space for the conversion process. Note: Much of that 6 GB will be used for temporary files that we will delete after the conversion process. Click on the Options button at the bottom. Under Preferences, choose your preferred video codec and file output size. XviD and x264 are installed by default. If you prefer to use DivX, you will have to install it separately. Also note the “Two pass” option. Checking the “Two pass” box will encode your video twice for higher quality, but will take more time. Un-checking the box will speed up the conversion process. Under Audio track, note that English subtitles are enabled by default, so to remove the subtitles, you will need to change the dropdown list so it shows only a dash (-). You can also select “Use TV Mode” if your primary playback will be on a 4:3 TV screen. Click “Next.” Full Auto Mode vs. Manual Mode You should now be back to the initial screen. Next, we’ll need to determine whether or not we can use “Full Auto Mode” to convert the movie. The difference is that “Full Auto Mode” will automatically perform a few steps that you will otherwise have to do manually. If you choose the “Full Auto Mode” option, FairUse Wizard will look for the video on the DVD with the longest duration and assume it is the chain that it should convert to AVI. It’s possible, however, your disc may contain a few chains of similar size, such as a theatrical cut and director’s cut, and the longest chain may not be the one you wish to convert. Make sure that “Full auto mode” is not checked yet, and click “Next.” FairUse Wizard will parse the IFO files and display all video chains longer than 60 seconds. In most cases, you will find that only the largest chain closely matches the duration of the movie. In these instances, you can use “Full Auto Mode.” If you find more than one chain close in duration to the length of the movie, consult the literature on the DVD case, or search online, to find the actual running time of the movie.
If the proper file chain is not the longest chain, you won’t be able to use “Full Auto Mode.” Full Auto Mode To use “Full Auto Mode,” simply click the “Back” button to return to the initial screen. Now, place a check in the “Full auto mode” check box. Click “Next.” You will then be prompted to choose your DVD drive, then click “OK.” FairUse Wizard will parse the IFO files… … and then prompt you to select the drive that contains the DVD one more time before beginning the conversion process. Click “OK.” Manual Mode If you cannot (or don’t wish to) use Full Auto Mode, choose the appropriate video chain and click “Next.” FairUse Wizard will first go through the process of indexing the video. Note: If you get a runtime error during this portion of the process, it likely means that FairUse Wizard cannot handle the copy protection, and thus cannot convert the DVD. FairUse Wizard will automatically detect a cropping region. If necessary, you can edit the cropping region by adjusting the cropping region settings to the left. Click “Next.” Next, click “Auto Detect” to choose the proper field combination. Click “OK” on the pop up window that displays your Field Mode. Then click “Next.” This next screen is mainly composed of settings from the Options screen. You can make changes at this point such as codec or output size. Click “Next” when ready. Video Conversion Now the video conversion process will begin. This may take a few hours depending on your system’s hardware. Note: There is a check box to “Shutdown computer when done” if you choose to run the conversion overnight or before leaving for work. The first phase will be video encoding… Then the audio… If you chose the “Two Pass” option, your video will be encoded again on the second pass. Then you’re finished. Unfortunately, FairUse Wizard doesn’t clean up after itself very well. After the process is complete, you’ll want to browse to your output directory and delete all the temporary files as they take up a considerable amount of hard drive space. Now you’re ready to enjoy your movie. Conclusion FairUse Wizard is a nice way to back up your DVD movies to good quality .avi files. You can store them on your hard drive, watch them on a media PC, or burn them to disc. Many DVD players even allow for playback of DivX or XviD encoded video from a CD or DVD. For those of you with children, you can burn that AVI file to CD for your kids, and keep your original DVDs stored safely out of harm’s way. Download FairUse Wizard 2.9 LE

    Read the article

  • Enhanced REST Support in Oracle Service Bus 11gR1

    - by jeff.x.davies
    In a previous entry on REST and Oracle Service Bus (see http://blogs.oracle.com/jeffdavies/2009/06/restful_services_with_oracle_s_1.html) I encoded the REST query string as part of the relative URL itself. For example, consider the following URI: http://localhost:7001/SimpleREST/Products/id=1234 Now, technically there is nothing wrong with this approach. However, it is generally more common to encode the search parameters into the query string. Take a look at the following URI that shows this principle: http://localhost:7001/SimpleREST/Products?id=1234 At first blush this appears to be a trivial change. However, this approach is more intuitive, especially if you are passing in multiple parameters. For example: http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony The above URI is obviously used to retrieve a list of televisions made by Sony. In prior versions of OSB (before 11gR1PS3), parsing the query string of a URI was more difficult than in the current release. In 11gR1PS3 it is now much easier to parse the query strings, which in turn makes developing REST services in OSB even easier. In this blog entry, we will re-implement the REST-ful Products services using query strings for passing parameter information. Let's begin with the implementation of the Products REST service. This service is implemented in the Products.proxy file of the project. Let's begin with the overall structure of the service, as shown in the following screenshot. This is a common pattern for REST services in the Oracle Service Bus. You implement different flows for each of the HTTP verbs that you want your service to support. Let's take a look at how the GET verb is implemented. This is the path that is taken if you were to point your browser to: http://localhost:7001/SimpleREST/Products?id=1234 There is an Assign action in the request pipeline that shows how to extract a query parameter. Here is the expression that is used to extract the id parameter: $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="id"]/@value The Assign action stores the value in an OSB variable named id. Using this type of XPath statement you can query for any variables by name, without regard to their order in the parameter list. The Log statement is there simply to provide some debugging info in the OSB server console. The response pipeline contains a Replace action that constructs the response document for our REST service. Most of the response data is static, but the ID field that is returned is set based upon the query-parameter that was passed into the REST proxy. Testing the REST service with a browser is very simple. Just point it to the URL I showed you earlier. However, the browser is really only good for testing simple GET services. The OSB Test Console provides a much more robust environment for testing REST services, no matter which HTTP verb is used. Let's see how to use the Test Console to test this GET service. Open the OSB web console (http://localhost:7001/sbconsole) and log in as the administrator. Click on the Test Console icon (the little "bug") next to the Products proxy service in the SimpleREST project. This will bring up the Test Console browser window. Unlike SOAP services, we don't need to do much work in the request document because all of our request information will be encoded into the URI of the service itself. Below the Request Document section of the Test Console is the Transport section.
Expand that section and modify the query-parameters and http-method fields as shown in the next screenshot. By default, the query-parameters field will have the tags already defined. You just need to add a tag for each parameter you want to pass into the service. For our purposes with this particular call, you'd set the query-parameters field as follows: <tp:query-parameters> <tp:parameter name="id" value="1234" /> </tp:query-parameters> Now you are ready to push the Execute button to see the results of the call. That covers the process for parsing query parameters using OSB. However, what if you have an OSB proxy service that needs to consume a REST-ful service? How do you tell OSB to pass the query parameters to the external service? In the sample code you will see a second proxy service called CallREST. It invokes the Products proxy service in exactly the same way it would invoke any REST service. Our CallREST proxy service is defined as a SOAP service. This helps to demonstrate OSB's ability to mediate between service consumers and service providers, decreasing the level of coupling between them. If you examine the message flow for the CallREST proxy service, you'll see that it uses an Operational branch to isolate processing logic for each operation that is defined by the SOAP service. We will focus on the getProductDetail branch, which calls the Products REST service using the HTTP GET verb. Expand the getProduct pipeline and the stage node that it contains. There is a single Assign statement that simply extracts the productID from the SOAP request and stores it in a local OSB variable. Nothing surprising here. The real work (and the real learning) occurs in the Route node below the pipeline. The first thing to learn is that you need to use a route node when calling REST services, not a Service Callout or a Publish action. That's because only the Routing action has access to the $outbound variable, especially when invoking a business service. The Routing action contains 3 Insert actions. The first Insert action shows how to specify the HTTP verb as a GET. The second insert action simply inserts the XML node into the request. This element does not exist in the request by default, so we need to add it manually. Now that we have the element defined in our outbound request, we can fill it with the parameters that we want to send to the REST service. In the following screenshot you can see how we define the id parameter based on the productID value we extracted earlier from the SOAP request document. That expression will look for the parameter that has the name id and extract its value. That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of the Oracle Service Bus 11gR1PS3. Download the sample source code here: rest2_sbconfig.jar Ubuntu and the OSB Test Console You will get an error when you try to use the Test Console with the Oracle Service Bus, using Ubuntu (or likely a number of other Linux distros also). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administrator console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading. Then select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tab and the General sub tab in the main window. Look for the Listen Address field.
By default it is blank, which means it is listening on all interfaces. For some reason Ubuntu doesn't like this. So enter a value like localhost or the specific IP address or DNS name for your server (usually it's just localhost in development environments). Save your changes and restart the server. Your Test Console will now work correctly.
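For completeness, outside the Test Console you can exercise the same GET request from any HTTP client. The following .NET sketch is illustrative only and is not part of the sample project:
// Hypothetical .NET client for the Products service, for illustration only.
using System;
using System.IO;
using System.Net;

class ProductsRestClient {
    static void Main() {
        // The same GET request the Test Console issues
        string url = "http://localhost:7001/SimpleREST/Products?id=1234";
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream())) {
            Console.WriteLine(reader.ReadToEnd()); // dumps the XML response document
        }
    }
}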

    Read the article

  • Earthquake Locator - Live Demo and Source Code

    - by Bobby Diaz
    Quick Links Live Demo Source Code I finally got a live demo up and running! I signed up for a shared hosting account over at discountasp.net so I could post a working version of the Earthquake Locator application, but ran into a few minor issues related to RIA Services. Thankfully, Tim Heuer had already encountered and explained all of the problems I had along with solutions to these and other common pitfalls. You can find his blog post here. The ones that got me were the default authentication tag being set to Windows instead of Forms, the need to add the <baseAddressPrefixFilters> tag since I was running on a shared server using host headers, and finally the Multiple Authentication Schemes settings in the IIS7 Manager. To get the demo application ready, I pulled down local copies of the earthquake data feeds that the application can use instead of pulling from the USGS web site. I basically added the feed URL as an app setting in the web.config: <appSettings> <!-- USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/ --> <!--<add key="FeedUrl" value="http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml" />--> <!--<add key="FeedUrl" value="http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml" />--> <!--<add key="FeedUrl" value="~/Demo/1day-M2.5.xml" />--> <add key="FeedUrl" value="~/Demo/7day-M2.5.xml" /> </appSettings> You will need to do the same if you want to run from local copies of the feed data. I also made the following minor changes to the EarthquakeService class so that it gets the FeedUrl from the web.config: private static readonly string FeedUrl = ConfigurationManager.AppSettings["FeedUrl"]; /// <summary> /// Gets the feed at the specified URL. /// </summary> /// <param name="url">The URL.</param> /// <returns>A <see cref="SyndicationFeed"/> object.</returns> public static SyndicationFeed GetFeed(String url) { SyndicationFeed feed = null; if ( !String.IsNullOrEmpty(url) && url.StartsWith("~") ) { // resolve virtual path to physical file system url = System.Web.HttpContext.Current.Server.MapPath(url); } try { log.Debug("Loading RSS feed: " + url); using ( var reader = XmlReader.Create(url) ) { feed = SyndicationFeed.Load(reader); } } catch ( Exception ex ) { log.Error("Error occurred while loading RSS feed: " + url, ex); } return feed; } You can now view the live demo or download the source code here, but be sure you have WCF RIA Services installed before running the application locally and make sure the FeedUrl is pointing to a valid location. Please let me know if you have any comments or if you run into any issues with the code. Enjoy!
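As a quick illustration of how the modified service method can be exercised from other server-side code, here is a hypothetical usage sketch (not from the original post; it assumes the EarthquakeService class and the web.config entry shown above, plus usings for System.Configuration and System.ServiceModel.Syndication):
// Hypothetical caller, for illustration only.
var feed = EarthquakeService.GetFeed(ConfigurationManager.AppSettings["FeedUrl"]);
if (feed != null) {
    foreach (var item in feed.Items) {
        // Title and publish date of each earthquake entry in the feed
        Console.WriteLine("{0} ({1:u})", item.Title.Text, item.PublishDate);
    }
}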

    Read the article
