Search Results

Search found 13955 results on 559 pages for 'date comparison'.


  • What is the best way to include a php file as a template?

    - by Jon
    I have a simple template that's mostly HTML and pulls some data out of SQL via PHP, and I want to include this template in three different spots of another PHP file. What is the best way to do this? Can I include it and then print the contents? Example of the template: Price: <?php echo $price ?>. And, for example, I have another PHP file that will show the template only if the date is more than two days after a date in SQL.

    Read the article

  • php strtotime() some help

    - by sea_1987
    Hi there, I am taking credit card details, and I take the expiration date in two form fields: one for the expiration month and one for the expiration year. I want to store the expiration date as a timestamp. Will strtotime("05/2010") create a timestamp, or do I need to pass a day as well, or is there an alternative? Thanks

    Read the article

  • SQL more elegant combination of boolean checks possible?

    - by Matze
    Call me pedantic, but is there a more elegant way to combine all those checks?

        SELECT * FROM [TABLE1]
        WHERE [path] = 'RECEIVE'
          AND [src_ip] NOT LIKE '10.48.20.10'
          AND [src_ip] NOT LIKE '0.%'
          AND [src_ip] NOT LIKE '127.%'
        ORDER BY [date],[time] DESC;

    Into something like this:

        SELECT * FROM [TABLE1]
        WHERE [path] = 'RECEIVE'
          AND [src_ip] NOT LIKE IN ('10.48.20.10','0.%','127.%', .... )
        ORDER BY [date],[time] DESC;

    Read the article

  • how to display one input text box..

    - by kumar
    Hello,

        <input type="text" id="Date-<%=Model.ID%>" /><%=Html.EditorFor(model => model.Date)%>

    If I use this I get two input boxes on the front end. Can anybody help me out with how to get only one textbox? Thanks

    Read the article

  • how to validate

    - by kumar
    I have a calendar control that I am using; the user can select any date from the calendar. I need to validate dates like Saturdays and Sundays, and 1/1 and 1/25: if they select one of these days, I need to show them a popup message saying it is not a valid date. Can anybody help me out? Thanks

    Read the article

  • Mysql ADDDATE() or DATE_ADD() with table columns?

    - by John
    How do I do something like the following?

        SELECT ADDDATE(t_invoice.last_bill_date,
                       INTERVAL t_invoice.interval t_invoice.interval_unit)
        FROM t_invoice

    ...where the column t_invoice.last_bill_date is a date, t_invoice.interval is an integer, and t_invoice.interval_unit is a string. Basically, I want to determine a person's next bill date based on his desired billing frequency. Is there a better way to achieve this?

    Read the article

  • How to compare the year with the current year in iphone?

    - by Warrior
    I am new to iPhone development. I am parsing an XML URL and displaying the title, date and summary in a table view. I noticed some of the dates were very old, like "Wed, 31 Dec 1969 19:00:00 -0500", so I don't want to display dates that are more than a year older than the current year. How do I do that? I used the sample code from this site for parsing and displaying the details. Please help me out. Thanks.

    Read the article

  • How can I get reports for last 24h in SQL?

    - by newbie
    I need to get all reports made in the last 24 hours; the table has a CreatedDate column, so I need to check in the database that each report was created in the last 24 hours. I know I can use getdate() to get the current date, but how can I subtract 24 hours from it and then compare the result with CreatedDate?

    Read the article

  • value types in the vm

    - by john.rose
Or, enduring values for a changing world.

Introduction

A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types.

This note is a thought experiment to extend the JVM's performance model in support of value types. The demonstration has two phases. Initially the extension can simply use design patterns, within the current bytecode architecture, and in today's Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both. We will look at a few possibilities for these new features.

An Axiom of Value

In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries:

- A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished. Changing a component of a value type requires construction of a new value.
- The equals and hashCode operations are strictly component-wise.
- If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality.

A value type can be viewed in terms of what it doesn't do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types. These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation.

A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type.

Representations of Value

Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables:

    /*Complex x = Complex.valueOf(1.0, 2.0):*/
    double x_re = 1.0, x_im = 2.0;

These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers:

    double distance(/*Complex x:*/ double x_re, double x_im,
                    /*Complex y:*/ double y_re, double y_im) {
        /*Complex z = x.minus(y):*/
        double z_re = x_re - y_re, z_im = x_im - y_im;
        /*return z.abs():*/
        return Math.sqrt(z_re*z_re + z_im*z_im);
    }

A boxed representation groups component values under a single object reference. The reference is to a 'wrapper class' that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.)

The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no 'array of structs' feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList, and those which have special-case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any 'real' JVM type should have a story for returns, arrays, and API interoperability.

The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types.
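To make the boxed alternative concrete, the simplest possible wrapper for the unboxed pair above might look like the following minimal sketch (ComplexBox is a name invented here for illustration; the article's fuller Complex pattern appears below):

    // Minimal boxed carrier for the unboxed pair (x_re, x_im) above.
    // Equality and hashing are strictly component-wise, per the axiom.
    final class ComplexBox {
        final double re, im;
        ComplexBox(double re, double im) { this.re = re; this.im = im; }
        public boolean equals(Object x) {
            return x instanceof ComplexBox
                && re == ((ComplexBox) x).re && im == ((ComplexBox) x).im;
        }
        public int hashCode() {
            return 31 * Double.valueOf(re).hashCode()
                    + Double.valueOf(im).hashCode();
        }
    }

Note that nothing in this wrapper prevents the value-unsafe operations listed earlier: two distinct boxes with equal components can still be told apart by == or used as separate monitors, which is exactly what the rules below are meant to forbid.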
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions.

Let's Start Coding

Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type.

    @ValueSafe
    public final class Complex implements java.io.Serializable {
        // immutable component structure:
        public final double re, im;
        private Complex(double re, double im) {
            this.re = re; this.im = im;
        }
        // interoperability methods:
        public String toString() { return "Complex("+re+","+im+")"; }
        public List<Double> asList() { return Arrays.asList(re, im); }
        public boolean equals(Complex c) {
            return re == c.re && im == c.im;
        }
        public boolean equals(@ValueSafe Object x) {
            return x instanceof Complex && equals((Complex) x);
        }
        public int hashCode() {
            return 31*Double.valueOf(re).hashCode()
                    + Double.valueOf(im).hashCode();
        }
        // factory methods:
        public static Complex valueOf(double re, double im) {
            return new Complex(re, im);
        }
        public Complex changeRe(double re2) { return valueOf(re2, im); }
        public Complex changeIm(double im2) { return valueOf(re, im2); }
        public static Complex cast(@ValueSafe Object x) {
            return x == null ? ZERO : (Complex) x;
        }
        // utility methods and constants:
        public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }
        public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }
        public double abs() { return Math.sqrt(re*re + im*im); }
        public static final Complex PI = valueOf(Math.PI, 0.0);
        public static final Complex ZERO = valueOf(0.0, 0.0);
    }

This is not a minimal definition, because it includes some utility methods and other optional parts. The essential elements are as follows:

- The class is marked as a value type with an annotation.
- The class is final, because it does not make sense to create subclasses of value types.
- The fields of the class are all non-private and final. (I.e., the type is immutable and structurally transparent.)
- From the supertype Object, all public non-final methods are overridden.
- The constructor is private.

Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types:

- One or more factory methods are responsible for value creation, including a component-wise valueOf method.
- There are utility methods for complex arithmetic and instance creation, such as plus and changeIm.
- There are static utility constants, such as PI.
- The type is serializable, using the default mechanisms.
- There are methods for converting to and from dynamically typed references, such as asList and cast.

The Rules

In order to use value types properly, the programmer must avoid value-unsafe operations. A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations. No such compilers exist today, but to simplify our account here, we will pretend that they do exist.

A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type. If a value-safe class is marked final, it is in fact a value type. All other value-safe classes must be abstract. The non-static fields of a value class must be non-public and final, and all its constructors must be private.

Under the above rules, a standard interface could be helpful to define value types like Complex. Here is an example:

    @ValueSafe
    public interface ValueType extends java.io.Serializable {
        // All methods listed here must get redefined.
        // Definitions must be value-safe, which means
        // they may depend on component values only.
        List<? extends Object> asList();
        int hashCode();
        boolean equals(@ValueSafe Object c);
        String toString();
    }

    //@ValueSafe inherited from supertype:
    public final class Complex implements ValueType { …

The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system. It could appear as an element type or parameter bound, for facilities which are designed to work on value types only. More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types.

Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods. (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.) Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following:

- The type of E is a value-safe type.
- E names a field, parameter, or local variable whose declaration is marked @ValueSafe.
- E is a call to a method whose declaration is marked @ValueSafe.
- E is an assignment to a value-safe variable, field reference, or array reference.
- E is a cast to a value-safe type from a value-safe expression.
- E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe.

Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation. In particular, it cannot be synchronized on, nor can it be compared with the "==" operator, not even with a null or with another value-safe type.

In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation. Thus, the prime axiom of value types will be satisfied: no two value-type values will be distinguishable as long as their component values are equal.

More Code

To illustrate these rules, here are some usage examples for Complex:

    Complex pi = Complex.valueOf(Math.PI, 0);
    Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0;
    ValueType vtype = pi;
    @SuppressWarnings("value-unsafe")
    Object obj = pi;
    @ValueSafe Object obj2 = pi;
    obj2 = new Object();  // ok
    List<Complex> clist = new ArrayList<Complex>();
    clist.add(pi);  // (ok assuming List.add param is @ValueSafe)
    List<ValueType> vlist = new ArrayList<ValueType>();
    vlist.add(pi);  // (ok)
    List<Object> olist = new ArrayList<Object>();
    olist.add(pi);  // warning: "value-unsafe"
    boolean z = pi.equals(zero);
    boolean z1 = (pi == zero);  // error: reference comparison on value type
    boolean z2 = (pi == null);  // error: reference comparison on value type
    boolean z3 = (pi == obj2);  // error: reference comparison on value type
    synchronized (pi) { }  // error: synch of value, unpredictable result
    synchronized (obj2) { }  // unpredictable result
    Complex qq = pi;
    qq = null;  // possible NPE; warning: "null-unsafe"
    qq = (Complex) obj;  // warning: "null-unsafe"
    qq = Complex.cast(obj);  // OK
    @SuppressWarnings("null-unsafe")
    Complex empty = null;  // possible NPE
    qq = empty;  // possible NPE (null pollution)

The Payoffs

It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations. Fields and variables of value types can be split into their unboxed components. Non-static methods on value types can be transformed into static methods which take the components as value parameters.

Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules? Why not detect programs automagically and perform unboxing transparently? The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced. Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal. In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur. This is why I'm writing this note: to enlist help from, and provide assurances to, the programmer. Basically, I'm shooting for a good set of user-supplied "pragmas" to frame the desired optimization.

Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules. There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables.

There are some rough corners to the present scheme. Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions. One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type. That is what the "cast" method does above.

Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances. Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.

Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around. Here are a couple of first approaches:

    public interface java.util.List<@ValueSafe T> extends Collection<T> { …

    public interface java.util.List<T extends Object|ValueType> extends Collection<T> { …

(The second approach would require disjunctive types, in which value-safety is "contagious" from the constituent types.)

With more transformations, the return value types of methods can also be unboxed. This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title "Tuples in the VM". But for starters, the JVM can apply this transformation under the covers, to internally compiled methods. This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors. The lack of multiple return values has a strong distorting effect on many Java APIs.

Even if the JVM fails to unbox a value, there is still potential benefit to the value type. Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands. When copying JVM objects, it is extremely helpful to know when an object's identity is important or not. If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible. Proxies must be managed carefully, and this can be expensive. On the other hand, value types are exactly those types which a JVM can "copy and forget" with no downside.

Array types are crucial to bulk data interfaces. (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.) Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data.

Unboxed arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems. They require the deepest transformations, relative to today's JVM. There is an impedance mismatch between value-type arrays and Java's covariant array typing, so compromises will need to be struck with existing Java semantics. It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains.

It may be sufficient to extend the "value-safe" concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form. Such value-safe arrays would not be convertible to Object[] arrays. Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements.

Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays.
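Before such JVM magic arrives, the transposed-layout workaround mentioned above can be written out by hand. Here is a minimal sketch (ComplexArray is a hypothetical helper invented for illustration, not part of the proposal), packing the components of the Complex type above into a single double[]:

    // Hand-rolled stand-in for an unboxed Complex[]: re at even indices,
    // im at odd indices, giving the cache density discussed above.
    final class ComplexArray {
        private final double[] packed;  // length = 2 * logical length
        ComplexArray(int length) { this.packed = new double[2 * length]; }
        int length() { return packed.length / 2; }
        Complex get(int i) {
            return Complex.valueOf(packed[2*i], packed[2*i + 1]);
        }
        void set(int i, Complex c) {
            packed[2*i] = c.re;
            packed[2*i + 1] = c.im;
        }
    }

Each get still boxes on the way out; the point of the proposed value-safe arrays is to let the JVM keep even that traffic unboxed.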
Implicit Method Definitions

The example of class Complex above may be unattractively complex. I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code. On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated.

Java has a rule for implicitly defining a class's constructor, if it defines no constructors explicitly. Likewise, there are rules for providing default access modifiers for interface members. Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types. Here's an example of a "highly implicit" definition of a complex number type:

    public class Complex implements ValueType {  // implicitly final
        public double re, im;  // implicitly public final
        //implicit methods are defined elementwise from the fields:
        //  toString, asList, equals(2), hashCode, valueOf, cast
        //optionally, explicit methods (plus, abs, etc.) would go here
    }

In other words, with the right defaults, a simple value type definition can be a one-liner. The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>.

Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly:

    public @ValueType class Complex { … // implicitly final, implements ValueType

(But to me it seems better to communicate the "magic" via an interface, even if it is rooted in an annotation.)

Implicitly Defined Value Types

So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer. A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or a Cartesian point. The name and the methods convey the intended meaning.

But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage? Perhaps we are writing a method (like "divideAndRemainder") which naturally returns a pair of numbers instead of a single number. Wrapping the pair of numbers in a nominal type (like "QuotientAndRemainder") makes as little sense as wrapping a single return value in a nominal type (like "Quotient"). What we need here are structural value types, commonly known as tuples.

For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows:

    public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {
        double $1, $2;
    }

Here the component names are fixed and all the required methods are defined implicitly. The supertype is an abstract class which has suitable shared declarations. The name itself mentions a JVM-style method parameter descriptor, which may be "cracked" to determine the number and types of the component fields.

The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it.
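For instance, a hypothetical client like the following would be one source of such a linkage request (this assumes source-level access to the $DD type sketched above, which no real JDK provides):

    // Hypothetical use of the structural tuple type $DD: returning a pair
    // of doubles without inventing a nominal QuotientAndRemainder type.
    public static java.lang.tuple.$DD divideAndRemainder(double n, double d) {
        double q = Math.floor(n / d);
        double r = n - q * d;
        return new java.lang.tuple.$DD(q, r);  // assumes an implicitly defined constructor
    }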
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types. (Specifics of naming and name mangling need some tasteful engineering.)

Tuples also seem to demand, even more than nominal types, some support from the language. (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.) At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or "destructuring bind") for taking tuples apart, at least when they are parameter lists. Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection. That is a task for another day.

Other Use Cases

Besides complex numbers and simple tuples there are many use cases for value types. Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses.

Other types have a variable-length 'tail' of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include 'header' information.

Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct 'header' fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array.

Field types and their order must be a globally visible part of the API. The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components that appear as parameters, return types, and array elements. This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations. A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile. Perhaps constant pool references to value types need to declare the field order as assumed by each API user.

This brings up the delicate status of private fields in a value type. It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack. If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction? Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double).

My initial thought was to make the fields always public, which would make the security problem moot. But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes. I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value. Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values.

Similar to String, we could build an efficient multi-precision decimal type along these lines:

    public final class DecimalValue implements ValueType {
        protected final long header;
        protected final BigInteger digits;
        private DecimalValue(long header, BigInteger digits) {
            this.header = header; this.digits = digits;
        }
        public static DecimalValue valueOf(int value, int scale) {
            assert(scale >= 0);
            return new DecimalValue(((long)value << 32) + scale, null);
        }
        public static DecimalValue valueOf(long value, int scale) {
            if (value == (int) value)
                return valueOf((int)value, scale);
            return new DecimalValue(-scale, BigInteger.valueOf(value));
        }
    }

Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed. (Note the tension between encapsulation and unboxing in this case. It would be better if the header and digits fields were private, but depending on where the unboxing information must "leak", it is probably safer to make a public revelation of the internal structure.)

Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues. (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.) Getting the full benefit of unboxing and arrays will require some new JVM magic.

Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits. For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types. This is probably only worthwhile if the unboxed arrays can be packed with such values.

More Daydreams

A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant. An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes. More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.

    @ValueSafe
    public interface ValueType extends java.io.Serializable,
            Unsynchronizable, Interchangeable { …

    public class Complex implements ValueType {
        // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
        …

It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types. This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable. It is likely that code which violates value-safety for wrapper types exists but is uncommon. It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code.

We should also reconsider the distinction between boxed and unboxed values in code. The design presented above obscures that distinction. As another thought experiment, we could imagine making a first-class distinction in the type system between boxed and unboxed representations. Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one. For example:

    complex pi = complex.valueOf(Math.PI, 0);
    Complex boxPi = pi;  // convert to boxed
    myList.add(boxPi);
    complex z = myList.get(0);  // unbox

Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types.

As said above, array types are crucial to bulk data interfaces, but are limited in the JVM. Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type. Something like this which can also accommodate value type components seems worthwhile. On the other hand, does it make sense for value types to contain short arrays? And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural "jumbo" data structure? These considerations must wait for another day and another note.

More Work

It seems to me that a good sequence for introducing such value types would be as follows:

1. Add the value-safety restrictions to an experimental version of javac.
2. Code some sample applications with value types, including Complex and DecimalValue.
3. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
4. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.

A similar investigation should be applied (concurrently) to array types. In this case, it seems to me that the starting point is in the JVM:

1. Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids. No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
2. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
3. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

That's enough musing from me for now. Back to work!

    Read the article

  • Windows XP - Security Update for Windows XP (KB923561) (KB946648) (KB956572) (KB958644)

    - by leeand00
    My father's computer has Windows XP, but when I try to install the service packs it always fails. What gives? Here are the errors that I get in the event log:

        Date: 2/6/2010
        Time: 12:02:18 AM
        Type: Error
        User: N/A
        Computer: EVO
        Source: Windows Update Agent
        Category: Installation
        Event ID: 20
        Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB946648). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        0000: 57 69 6e 33 32 48 52 65   Win32HRe
        0008: 73 75 6c 74 3d 30 78 38   sult=0x8
        0010: 30 30 37 30 30 30 32 20   0070002
        0018: 55 70 64 61 74 65 49 44   UpdateID
        0020: 3d 7b 38 33 44 31 41 44   ={83D1AD
        0028: 46 35 2d 37 37 39 44 2d   F5-779D-
        0030: 34 30 31 36 2d 38 43 33   4016-8C3
        0038: 31 2d 35 34 39 32 37 30   1-549270
        0040: 46 36 37 42 33 46 7d 20   F67B3F}
        0048: 52 65 76 69 73 69 6f 6e   Revision
        0050: 4e 75 6d 62 65 72 3d 31   Number=1
        0058: 30 34 20 00               04 .

        Date: 2/6/2010
        Time: 12:02:18 AM
        Type: Error
        User: N/A
        Computer: EVO
        Source: Windows Update Agent
        Category: Installation
        Event ID: 20
        Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB956572). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        0000: 57 69 6e 33 32 48 52 65   Win32HRe
        0008: 73 75 6c 74 3d 30 78 38   sult=0x8
        0010: 30 30 37 30 30 30 32 20   0070002
        0018: 55 70 64 61 74 65 49 44   UpdateID
        0020: 3d 7b 44 46 32 46 30 41   ={DF2F0A
        0028: 39 38 2d 36 45 33 35 2d   98-6E35-
        0030: 34 33 37 39 2d 41 42 33   4379-AB3
        0038: 33 2d 41 30 33 30 33 45   3-A0303E
        0040: 46 37 34 42 32 41 7d 20   F74B2A}
        0048: 52 65 76 69 73 69 6f 6e   Revision
        0050: 4e 75 6d 62 65 72 3d 31   Number=1
        0058: 30 32 20 00               02 .

        Date: 2/6/2010
        Time: 12:02:18 AM
        Type: Error
        User: N/A
        Computer: EVO
        Source: Windows Update Agent
        Event ID: 20
        Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB958644). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        0000: 57 69 6e 33 32 48 52 65   Win32HRe
        0008: 73 75 6c 74 3d 30 78 38   sult=0x8
        0010: 30 30 37 30 30 30 32 20   0070002
        0018: 55 70 64 61 74 65 49 44   UpdateID
        0020: 3d 7b 39 33 39 37 41 32   ={9397A2
        0028: 31 46 2d 32 34 36 43 2d   1F-246C-
        0030: 34 35 33 42 2d 41 43 30   453B-AC0
        0038: 35 2d 36 35 42 46 34 46   5-65BF4F
        0040: 43 36 42 36 38 42 7d 20   C6B68B}
        0048: 52 65 76 69 73 69 6f 6e   Revision
        0050: 4e 75 6d 62 65 72 3d 31   Number=1
        0058: 30 31 20 00               01 .

        Date: 2/6/2010
        Time: 12:02:18 AM
        Type: Error
        User: N/A
        Computer: EVO
        Source: Windows Update Agent
        Category: Installation
        Event ID: 20
        Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB923561). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        0000: 57 69 6e 33 32 48 52 65   Win32HRe
        0008: 73 75 6c 74 3d 30 78 38   sult=0x8
        0010: 30 30 37 30 30 30 32 20   0070002
        0018: 55 70 64 61 74 65 49 44   UpdateID
        0020: 3d 7b 33 31 30 41 34 43   ={310A4C
        0028: 30 38 2d 35 39 33 44 2d   08-593D-
        0030: 34 31 41 33 2d 42 42 35   41A3-BB5
        0038: 37 2d 38 33 42 33 38 36   7-83B386
        0040: 44 37 37 33 42 35 7d 20   D773B5}
        0048: 52 65 76 69 73 69 6f 6e   Revision
        0050: 4e 75 6d 62 65 72 3d 31   Number=1
        0058: 30 33 20 00               03 .

    Thank you, Andrew

    Read the article

  • Can't log to a file from tomcat6 with log4j

    - by Ivan Nakov
    I have one stupid problem which has been killing me for hours. I'm trying to configure logging for my project. I started with a simple Spring MVC project generated by STS, then added an org.apache.log4j.RollingFileAppender to the existing log4j.xml file:

        <?xml version="1.0" encoding="UTF-8"?>
        <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

            <!-- Appenders -->
            <appender name="console" class="org.apache.log4j.ConsoleAppender">
                <param name="Target" value="System.out" />
                <layout class="org.apache.log4j.PatternLayout">
                    <param name="ConversionPattern" value="%-5p: %c - %m%n" />
                </layout>
            </appender>
            <appender name="FilleAppender" class="org.apache.log4j.RollingFileAppender">
                <param name="maxFileSize" value="100KB" />
                <param name="maxBackupIndex" value="2" />
                <param name="File" value="/home/ivan/Desktop/app.log" />
                <layout class="org.apache.log4j.PatternLayout">
                    <param name="ConversionPattern" value="%d{ABSOLUTE} %5p %c{1}: %m%n " />
                </layout>
            </appender>

            <!-- Application Loggers -->
            <logger name="org.elsys.logger">
                <level value="debug" />
            </logger>

            <!-- 3rdparty Loggers -->
            <logger name="org.springframework.core">
                <level value="info" />
            </logger>
            <logger name="org.springframework.beans">
                <level value="info" />
            </logger>
            <logger name="org.springframework.context">
                <level value="info" />
            </logger>
            <logger name="org.springframework.web">
                <level value="info" />
            </logger>

            <!-- Root Logger -->
            <root>
                <priority value="debug" />
                <appender-ref ref="FilleAppender" />
            </root>
        </log4j:configuration>

    When I deploy the project to the tomcat6 server and open the URL, the logger doesn't generate a log file. I'm trying to log from this controller:

        @Controller
        public class HomeController {

            private static final Logger logger = LoggerFactory.getLogger(HomeController.class);

            /**
             * Simply selects the home view to render by returning its name.
             */
            @RequestMapping(value = "/", method = RequestMethod.GET)
            public String home(Locale locale, Model model) {
                logger.info("Welcome home! the client locale is " + locale.toString());
                Date date = new Date();
                DateFormat dateFormat = DateFormat.getDateTimeInstance(DateFormat.LONG, DateFormat.LONG, locale);
                String formattedDate = dateFormat.format(date);
                logger.debug("send view");
                model.addAttribute("serverTime", formattedDate);
                return "home";
            }
        }

    When I log from this simple Main class, it works correctly:

        public class Main {
            public static void main(String[] args) {
                Logger log = LoggerFactory.getLogger(Main.class);
                log.debug("Test");
            }
        }

    I'm using tomcat6 and Ubuntu 11.10. I did some research on the net and found various options to fix this problem, but they didn't help me. Please, if someone has ideas how to fix it, help me.
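    One quick way to separate a file-path or permissions problem from a configuration-loading problem (a diagnostic sketch using the plain log4j 1.2 API; it is not a fix by itself, and Log4jSmokeTest is just a name invented here) is to build the same appender programmatically and see whether the file appears:

        import java.io.IOException;
        import org.apache.log4j.Logger;
        import org.apache.log4j.PatternLayout;
        import org.apache.log4j.RollingFileAppender;

        public class Log4jSmokeTest {
            public static void main(String[] args) throws IOException {
                // Same appender as in log4j.xml, but built in code, so the XML
                // file and the webapp classloader are taken out of the picture.
                RollingFileAppender appender = new RollingFileAppender(
                        new PatternLayout("%d{ABSOLUTE} %5p %c{1}: %m%n"),
                        "/home/ivan/Desktop/app.log");
                appender.setMaxFileSize("100KB");
                appender.setMaxBackupIndex(2);

                Logger root = Logger.getRootLogger();
                root.addAppender(appender);
                root.debug("smoke test");  // should end up in app.log
            }
        }

    If this writes the file but the deployed webapp still does not, the usual suspects are where log4j.xml sits in the webapp (it must be on the classpath, e.g. WEB-INF/classes) and whether the controller's SLF4J Logger actually has an slf4j-log4j binding on the webapp classpath.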

    Read the article

  • Nginx error page with JSON response

    - by Waseem
    I'm trying to serve a maintenance page to clients making requests to my application when it is under maintenance. Following is my nginx configuration for that purpose:

        server {
            recursive_error_pages on;
            listen 80;
            ...
            if (-f $document_root/maintenance.html) {
                return 503;
            }
            error_page 404 /404.html;
            error_page 500 502 504 /500.html;
            error_page 503 @503;

            location = /404.html {
                root $document_root;
            }
            location = /500.html {
                root $document_root;
            }
            location @503 {
                error_page 405 =/maintenance.html;
                if (-f $request_filename) {
                    break;
                }
                rewrite ^(.*)$ /maintenance.html break;
            }
        }

    Let's say I have enabled maintenance of my site by creating $document_root/maintenance.html. This file, correctly, is served when a user makes a request with an Accept header of text/html:

        $ curl http://server.com/ -i -v -X GET -H "Accept: text/html"
        * Adding handle: conn: 0xf89420
        * Adding handle: send: 0
        * Adding handle: recv: 0
        * Curl_addHandleToPipeline: length: 1
        * - Conn 0 (0xf89420) send_pipe: 1, recv_pipe: 0
        * About to connect() to server.com port 80 (#0)
        *   Trying xxx.xxx.xxx.xxx...
        * Connected to server.com (xxx.xxx.xxx.xxx) port 80 (#0)
        > GET / HTTP/1.1
        > User-Agent: curl/7.33.0
        > Host: server.com
        > Accept: text/html
        >
        < HTTP/1.1 503 Service Temporarily Unavailable
        HTTP/1.1 503 Service Temporarily Unavailable
        * Server nginx/1.1.19 is not blacklisted
        < Server: nginx/1.1.19
        Server: nginx/1.1.19
        < Date: Thu, 14 Nov 2013 11:16:16 GMT
        Date: Thu, 14 Nov 2013 11:16:16 GMT
        < Content-Type: text/html
        Content-Type: text/html
        < Content-Length: 27
        Content-Length: 27
        < Connection: keep-alive
        Connection: keep-alive
        <
        This is under maintenance.
        * Connection #0 to host server.com left intact

    Now some clients set the Accept header to application/json. How do I send them a JSON response instead of maintenance.html? Following is the response that I get when setting Accept to application/json:

        $ curl http://server.com/ -i -v -X GET -H "Accept: application/json"
        * Adding handle: conn: 0x190c430
        * Adding handle: send: 0
        * Adding handle: recv: 0
        * Curl_addHandleToPipeline: length: 1
        * - Conn 0 (0x190c430) send_pipe: 1, recv_pipe: 0
        * About to connect() to server.com port 80 (#0)
        *   Trying xxx.xxx.xxx.xxx...
        * Connected to server.com (xxx.xxx.xxx.xxx) port 80 (#0)
        > GET / HTTP/1.1
        > User-Agent: curl/7.33.0
        > Host: server.com
        > Accept: application/json
        >
        < HTTP/1.1 503 Service Temporarily Unavailable
        HTTP/1.1 503 Service Temporarily Unavailable
        * Server nginx/1.1.19 is not blacklisted
        < Server: nginx/1.1.19
        Server: nginx/1.1.19
        < Date: Thu, 14 Nov 2013 11:15:50 GMT
        Date: Thu, 14 Nov 2013 11:15:50 GMT
        < Content-Type: text/html
        Content-Type: text/html
        < Content-Length: 27
        Content-Length: 27
        < Connection: keep-alive
        Connection: keep-alive
        <
        This is under maintenance.
        * Connection #0 to host server.com left intact

    Read the article

  • Exchange 2003 mail non-delivery (NDR), spam activity? events 7002 & 7004

    - by HighTechGeek
    Windows Server 2003 Small Business Server SP2, Exchange Version 6.5 (Build 7638.2: Service Pack 2).

    This network has been neglected, has been having email problems for years, and was on many blacklists. I was called in after the server eventually crashed... I got the server back up and running, but email problems persist. Outgoing mail delivery is sporadic: sometimes the mail goes through, sometimes a delayed delivery report is generated after a day or more, and sometimes it seems to go through but the recipient never receives it. Not sure if spammers are successfully using the server as a relay (see event entries below after turning on maximum SMTP logging)...

    - User PCs were infected with viruses and the server was blacklisted on many sites (I used mxtoolbox.com).
    - I have cleaned all the PCs and changed all passwords (including administrator).
    - I have requested removal from all of the blacklists - most have removed the listing, some take more time.
    - I have set up rDNS pointer records with the ISP (Comcast) - that was one reason for some of the blacklistings.
    - I have tested that it's not an open relay using telnet, as described here: www.amset.info/exchange/smtp-openrelay.asp
    - I followed the advice of a Spamhaus & Microsoft article to enable maximum SMTP logging: http://www.spamhaus.org/faq/answers.lasso?section=isp%20spam%20issues#320 which directed me to Microsoft KB article 895853, specifically the part 2/3 down titled "If mail relay occurs from an account on an Exchange computer that is not configured as an open relay".

    The Application Event Log is filling with this type of activity (Event ID 7002, 7004 & 3018 errors):

        Event Type: Error
        Event Source: MSExchangeTransport
        Event Category: SMTP Protocol
        Event ID: 7004
        Date: 1/18/2011
        Time: 7:33:29 AM
        User: N/A
        Computer: SERVER
        Description: This is an SMTP protocol error log for virtual server ID 1, connection #621. The remote host "212.52.84.180", responded to the SMTP command "rcpt" with "550 #5.1.0 Address rejected [email protected] ". The full command sent was "RCPT TO: ". This will probably cause the connection to fail.

    and this:

        Event Type: Warning
        Event Source: MSExchangeTransport
        Event Category: SMTP Protocol
        Event ID: 7002
        Date: 1/18/2011
        Time: 7:33:29 AM
        User: N/A
        Computer: SERVER
        Description: This is an SMTP protocol warning log for virtual server ID 1, connection #620. The remote host "212.52.84.170", responded to the SMTP command "rcpt" with "452 Too many recipients received this hour ". The full command sent was "RCPT TO: ". This may cause the connection to fail.

    or a variant of:

        Event Type: Warning
        Event Source: MSExchangeTransport
        Event Category: SMTP Protocol
        Event ID: 7002
        Date: 1/18/2011
        Time: 8:39:21 AM
        User: N/A
        Computer: SERVER
        Description: This is an SMTP protocol warning log for virtual server ID 1, connection #661. The remote host "82.57.200.133", responded to the SMTP command "rcpt" with "421 Service not available - too busy ". The full command sent was "RCPT TO: ". This may cause the connection to fail.

    also:

        Event Type: Error
        Event Source: MSExchangeTransport
        Event Category: NDR
        Event ID: 3018
        Date: 1/18/2011
        Time: 9:49:37 AM
        User: N/A
        Computer: SERVER
        Description: A non-delivery report with a status code of 5.4.0 was generated for recipient rfc822;[email protected] (Message-ID ).
        Causes: This message indicates a DNS problem or an IP address configuration problem
        Solution: Check the DNS using nslookup or dnsq. Verify the IP address is in IPv4 literal format.
        Data: 0000: ef 02 04 c0   ï..À

    Any guidance and/or suggestions and/or tests to perform would be greatly appreciated.

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of new great functionality that unifies many of the features of many of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX, WCF REST specifically - and combines them into a whole more consistent API. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But  RPC style calls that are common with AJAX callbacks in Web applications, have gotten a lot less focus and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks. RPC vs. 'Proper' REST RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns' RPC calls tend simply map a server side operation, passing in parameters and receiving a typed result where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that have needs for things like live validation against data, filling data based on user inputs, handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon.  Proper REST has its place - for 'real' API scenarios that manage and publish/share resources, but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both RPC abstraction as well as the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences especially if you're coming from ASP.NET AJAX service or WCF Rest when it comes to multiple parameters. Action Routing for RPC Style Calls If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that. 
Thankfully Web API also supports Action based routing which allows you create RPC style endpoints fairly easily:RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi", action = "GetAblums" } ); This uses traditional MVC style {action} method routing which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like above lets you specify an end point method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters as was easily done with WCF REST or ASP.NET AJAX things are not so obvious. Web API has no issue allowing for single parameter like this:[HttpPost] public string PostAlbum(Album album) { return String.Format("{0} {1:d}", album.AlbumName, album.Entered); } There are actually two ways to call this endpoint: albums/PostAlbum Using the Model Binder with plain POST values In this mechanism you're sending plain urlencoded POST values to the server which the ModelBinder then maps the parameter. Each property value is matched to each matching POST value. This works similar to the way that MVC's  ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:$.ajax( { url: "albums/PostAlbum", type: "POST", data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" }, success: function (result) { alert(result); }, error: function (xhr, status, p3, p4) { var err = "Error " + " " + status + " " + p3; if (xhr.responseText && xhr.responseText[0] == "{") err = JSON.parse(xhr.responseText).message; alert(err); } }); Here's what the POST data looks like for this request: The model binder and it's straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests. Using Web API JSON Formatter The other option is to post data using a JSON string. The process for this is similar except that you create a JavaScript object and serialize it to JSON first.album = { AlbumName: "PowerAge", Entered: new Date(1977,0,1) } $.ajax( { url: "albums/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify(album), success: function (result) { alert(result); } }); Here the data is sent using a JSON object rather than form data and the data is JSON encoded over the wire. The trace reveals that the data is sent using plain JSON (Source above), which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that WebAPI automatically deals with the date. I provided the date as a plain string, rather than a JavaScript date value and the Formatter and ModelBinder both automatically map the date propertly to the Entered DateTime property of the Album object. Passing multiple Parameters to a Web API Controller Single parameters work fine in either of these RPC scenarios and that's to be expected. ModelBinding always works against a single object because it maps a model. 
But what happens when you want to pass multiple parameters? Consider an API controller method with a signature like the following:

[HttpPost]
public string PostAlbum(Album album, string userToken)

Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with WCF REST and ASP.NET AJAX ASMX services, but as far as I can tell it is not directly possible using a POST operation with Web API. There are a few workarounds you can use to make this work:

Use both POST *and* QueryString Parameters in Conjunction
If you have both complex and simple parameters, you can pass the simple parameters on the query string. The above would actually work with:

/album/PostAlbum?userToken=sekkritt

but that's not always possible. In this example it might not be a good idea to pass a user token on the query string, though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping - they only work with simple types.

Use a single Object that wraps the two Parameters
If you go by service-based architecture guidelines, every service method should accept and return a single value only: the input wraps potentially multiple input parameters, and the output conveys status as well as the result value. You typically have an xxxRequest and an xxxResponse class that wrap the inputs and outputs. Here's what this method might look like:

public PostAlbumResponse PostAlbum(PostAlbumRequest request)
{
    var album = request.Album;
    var userToken = request.UserToken;

    return new PostAlbumResponse()
    {
        IsSuccess = true,
        Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
    };
}

with these support types:

public class PostAlbumRequest
{
    public Album Album { get; set; }
    public User User { get; set; }
    public string UserToken { get; set; }
}

public class PostAlbumResponse
{
    public string Result { get; set; }
    public bool IsSuccess { get; set; }
    public string ErrorMessage { get; set; }
}

To call this method you now have to assemble these objects on the client and send them up as JSON:

var album = {
    AlbumName: "PowerAge",
    Entered: "1/1/1977"
}
var user = {
    Name: "Rick"
}
var userToken = "sekkritt";

$.ajax({
    url: "samples/PostAlbum",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
    success: function (result) {
        alert(result.Result);
    }
});

I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, which mimics the structure of the server-side PostAlbumRequest class with its Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (say, you always pass an album or a user token) you might be able to abstract this into a single object that gets reused for all methods, but that gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actually accepts.

Use JObject to parse multiple Property Values out of an Object
If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object for their default AJAX calls.
Rather than directly calling a service you always passed an object containing a property for each parameter:

{ parm1: Value, parm2: Value2 }

WCF REST/ASP.NET AJAX would then parse the top-level property values and map them to the parameters of the endpoint method. This automatic type wrapping is no longer available directly in Web API, but since Web API now uses JSON.NET as its JSON serializer you can simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API controller end:

[HttpPost]
public string PostAlbum(JObject jsonData)
{
    dynamic json = jsonData;
    JObject jalbum = json.Album;
    JObject juser = json.User;
    string token = json.UserToken;

    var album = jalbum.ToObject<Album>();
    var user = juser.ToObject<User>();

    return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
}

This is clearly not as nice as having the parameters passed directly, but it works to let you pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally, JObject allows you to parse JObject instances into strongly typed objects, which enables us to retrieve the two objects passed as parameters from this jQuery code:

var album = {
    AlbumName: "PowerAge",
    Entered: "1/1/1977"
}
var user = {
    Name: "Rick"
}
var userToken = "sekkritt";

$.ajax({
    url: "samples/PostAlbum",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
    success: function (result) {
        alert(result);
    }
});

Summary
ASP.NET Web API brings many new features and advantages over the older Microsoft AJAX and REST APIs, but realize that some things - like passing multiple strongly typed object parameters - work a bit differently. It's not insurmountable, and it's good to know what options are available to simulate this behavior. Now let me say that it's probably not good practice to pass a bunch of parameters to an API call. Ideally APIs should be factored to accept a single parameter, or at least a single content parameter along with some identifier parameters that can be passed on the query string. That said, you will occasionally run into situations where you need to pass several objects to the server, and all three of the options mentioned here have merit in different situations. I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
At least there are options available to make it work. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
Here is the list of selected articles from SQLAuthority.com across all these years. Instead of listing all the articles, I have picked a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

2007
Spatial Database Definition and Research Documents - Here is Wikipedia's definition of a spatial database: a spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types.
Select Only Date Part From DateTime – Best Practice - A very common question I receive is how to get only the Date or Time part from a datetime value. In this blog post I explain the same in very simple words.
T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table - I received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging.
SQL SERVER – Cannot resolve collation conflict for equal to operation - One of the very first errors I ever encountered in my career was this collation conflict. I have blogged about it, and I have realized that many others like me face this error.
LEN and DATALENGTH of NULL Simple Example - Here is a question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog.
Recovery Models and Selection - A very simple and easy explanation of database backup recovery models and how to select the best option for you.
Explanation of SQL SERVER Hash Join - A hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It also helps if the smaller of the tables can be read into memory completely (though this is not required).
Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY - SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns
NorthWind Database or AdventureWorks Database – Sample Databases - In this blog post we learn how to install the Northwind database. I also shared the source where one can download this database, as it is used in many examples in the MSDN help files.
sp_HelpText for sp_HelpText – Puzzle - A quick, simple puzzle – do you know the answer to it? If not, go ahead and read the blog.

2008
SQL SERVER – 2008 – Step By Step Installation Guide With Images - When SQL Server 2008 was newly introduced, lots of people had no clue how to install it, and the number of questions I used to receive was enormous. I wrote this blog post in the spirit that it would help all the newbies install SQL Server 2008 with the help of images. Even today this blog post is the bible for people who are confused by SQL Server installation.
Inline Variable Assignment - I loved this feature. I have always wanted it to be present in SQL Server, and the last time I met developers from the Microsoft SQL Server team, I talked about this feature. I think it saves some time and makes the code more readable.
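Since the inline assignment feature is easy to show in a couple of lines, here is a minimal sketch (the variable name is mine, not from the original post):

-- declaration and assignment in a single statement (new in SQL Server 2008)
DECLARE @Counter INT = 10;
SELECT @Counter;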
Introduction to Policy Management – Enforcing Rules on SQL Server - If company policy is to create all stored procedures with the prefix 'usp', developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy that will prevent any future SP from being created with another prefix.

2009
Performance Counters from System Views – By Kevin Mckenna - Many of you are not aware that performance information is readily available inside SQL Server, without querying performance counters through a custom application or perfmon. Until now this fact has remained largely undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL (see the short query sketch below).
Customize Toolbar – Remove Debug Button from Toolbar - I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn in SQL Server 2005. The debugger button looks like a play button and runs Visual Studio debugging commands, which becomes quite infuriating for developers who work in both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio.
Effect of Normalization on Index and Performance - A very interesting conversation that started on Twitter. If you read only one link, this is the one I encourage you to read.
SSMS Feature – Multi-server Queries - Using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to gather information from multiple SQL Servers and create reports. This feature is a blessing for DBAs, as they can now assemble all the information instantaneously without going anywhere.
Query Optimizer Hint ROBUST PLAN – Question to You - 'ROBUST PLAN' is a query hint that works quite differently from other hints. It does not improve joins or force index usage; it just makes sure that a query does not fail because a row exceeds the maximum size limit. Let me elaborate upon it in the blog post.

2010
Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three-part story where we explored the same with examples: Fastest Way to Restore the Database; Difference Between DATETIME and DATETIME2; Difference Between DATETIME and DATETIME2 – WITH GETDATE.
Shrinking NDF and MDF Files – Readers' Opinion - Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest you keep that in mind before you start reading the following comment. If you are going to say that shrinking a database is bad and evil, here I am saying it first and loudly. Now, Imran's comment is written keeping in mind only the process of how the shrinking database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.
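As a quick illustration of the sys.dm_os_performance_counters view mentioned in the 2009 notes above, here is a minimal T-SQL sketch (the counter name is just an example):

SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Buffer cache hit ratio%';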
2011
Solution – Puzzle – SELECT * vs SELECT COUNT(*) - This is a very interesting question, and I am confident that not everyone knows the answer. Let me ask you again: which will be faster, SELECT * or SELECT COUNT(*)? Or do you think this is an apples-and-oranges comparison?

2012
Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage - In SQL Server 2012 there are a few enhancements to the SQL Server Resource Governor. One of the enhancements is how resources are allocated. Let me explain with the help of three different examples.
Finding Size of a Columnstore Index Using DMVs - A very common request I see is for the list of columnstore indexes along with their sizes and corresponding table names. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats (a rough sketch of such a query appears below). This script gives the size of the columnstore index on disk only. I am sure there are more advanced scripts that retrieve details of the components associated with a columnstore index, but I believe this is sufficient to start getting an idea of columnstore index size.
Developer Training Resources and Summary Roundup
Developer Training - Importance and Significance - Part 1 - In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all its technical aspects. I have a very high opinion of training employees; it is the most important task.
Developer Training – Employee Morals and Ethics – Part 2 - In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and the employer.
Developer Training – Difficult Questions and Alternative Perspective - Part 3 - This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers who do not receive a chance at training think about leaving the organization.
Developer Training – Various Options for Developer Training – Part 4 - In this part I tried to explore a few methods and options for training. The general feedback I received on this blog post was that it was too short and that I should have explored each training subject in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training.
Developer Training – A Conclusive Summary - Part 5 - There is no better motivation than a personal desire to learn new technology; honestly, there is nothing more personal than learning. 'Change is the only constant' and 'adapt & overcome' are essential lessons of life. One cannot stop learning or resist change. In the IT industry the 'ego of knowing it all' and the 'resistance to change' are the most challenging issues.
A Quick Look at Logging and Ideas around Logging - Question: what is the first thing that comes to your mind when you hear the word 'logging'? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends; let us go over them one by one.
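Picking up the columnstore item above: a rough sketch of a size query using those two DMVs (my own sketch, not the original script from the post; type 6 is the nonclustered columnstore index type in SQL Server 2012):

SELECT OBJECT_NAME(i.[object_id]) AS table_name,
       i.name AS index_name,
       SUM(p.used_page_count) * 8 AS size_kb   -- pages are 8 KB
FROM sys.indexes AS i
JOIN sys.dm_db_partition_stats AS p
  ON p.[object_id] = i.[object_id] AND p.index_id = i.index_id
WHERE i.[type] = 6
GROUP BY i.[object_id], i.name;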
Beginning Performance Tuning with SQL Server Execution Plan
Solution of Puzzle – Swap Value of Column Without Case Statement - Earlier this week I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the qualified entries.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How does mod_cache work with "must-revalidate" and "max-age"?

    - by Dmitriy Sosunov
Quick question before I explain my flow: can mod_cache revalidate with If-None-Match only when max-age has expired, when it is configured in reverse proxy mode? My goal is to reduce the number of revalidation requests to our origin server. For instance: the first request goes to the origin server, and mod_cache saves the response into the cache according to the Cache-Control: max-age header. Only when max-age has expired should mod_cache revalidate with If-None-Match. Currently, mod_cache revalidates on every request, regardless of whether max-age is defined. My configuration is Apache 2.4.3 (Windows); on Linux I see the same behavior that I show below.

ServerName proxy.lo
ProxyRequests Off
ProxyPreserveHost Off
Header set Vary "Accept, Content-Type, Content-Encoding, Accept-Language"
RequestHeader set X-Forwarded-Proto "http"
# modify headers for user agents
Header set Cache-Control "private, no-cache, no-store, no-transform"
CacheQuickHandler off
CacheDefaultExpire 300
# the origin server does not provide Last-Modified
CacheIgnoreNoLastMod On
CacheIgnoreCacheControl On
# the origin server defines Cache-Control: private, no-store only for user agents,
# therefore I would like to ignore those headers on the proxy server
CacheStorePrivate On
CacheStoreNoStore On
CacheEnable disk /
CacheRoot "C:/Apache.Cache"
CacheDirLevels 5
CacheDirLength 4
CacheMinExpire 15
CacheDetailHeader on
CacheHeader on
KeepAlive Off
ProxyPass / http://origin.lo/
ProxyPassReverse / http://origin.lo/

I have also turned on the debug log level to see how mod_cache handles content for caching. I provide this trace to show that mod_cache always decides that the content isn't fresh - why? max-age was provided (see below).

[Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(624): [client 192.168.1.100:63741] AH00698: cache: Key for entity /testpage?(null) is http://proxy.lo/testpage?
[Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(569): [client 192.168.1.100:63741] AH00709: Recalled cached URL info header http://proxy.lo/testpage?
[Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(865): [client 192.168.1.100:63741] AH00720: Recalled headers for URL http://proxy.lo/testpage?
[Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(320): [client 192.168.1.100:63741] AH00695: Cached response for /testpage isn't fresh. Adding/replacing conditional request headers.
[Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(414): [client 192.168.1.100:63741] AH00757: Adding CACHE_SAVE filter for /testpage
[Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(448): [client 192.168.1.100:63741] AH00759: Adding CACHE_REMOVE_URL filter for /testpage
[Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] mod_proxy.c(1068): [client 192.168.1.100:63741] AH01143: Running scheme http handler (attempt 0)
[Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1976): AH00942: HTTP: has acquired connection for (origin.lo)
[Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2029): [client 192.168.1.100:63741] AH00944: connecting http://origin.lo/testpage to origin.lo:80
[Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2151): [client 192.168.1.100:63741] AH00947: connected /testpage to origin.lo:80
[Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2554): AH00962: HTTP: connection complete to 192.168.1.100:80 (origin.lo)
[Sun Nov 04 11:58:42.903890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1991): AH00943: http: has released connection for (origin.lo)
[Sun Nov 04 11:58:42.903890 2012] [headers:debug] [pid 6492:tid 1400] mod_headers.c(800): AH01502: headers: ap_headers_output_filter()
[Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1190): [client 192.168.1.100:63741] AH00769: cache: Caching url: /testpage
[Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1196): [client 192.168.1.100:63741] AH00770: cache: Removing CACHE_REMOVE_URL filter.
[Sun Nov 04 11:58:42.904890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(1318): [client 192.168.1.100:63741] AH00737: commit_entity: Headers and body for URL http://proxy.lo/testpage? cached.

The first request to the origin server, without mod_proxy, to http://origin.lo/:

GET http://origin.lo/testpage HTTP/1.1
Host: origin.lo
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/json
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

The first response from the origin, without mod_proxy:

HTTP/1.1 200 OK
Cache-Control: must-revalidate, proxy-revalidate, max-age=30
Content-Type: application/json; charset=utf-8
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 04 Nov 2012 10:11:01 GMT
Content-Length: 1877

So I assumed that revalidation should occur only 30 seconds after the successful response. Isn't that right? Let's check it :) Within 30 seconds, Google Chrome didn't perform any request to the origin server to revalidate and returned the response from the local cache.
When max-age expires, Google Chrome performs a request to revalidate:

GET http://origin.lo/testpage HTTP/1.1
Host: origin.lo
Connection: keep-alive
Cache-Control: max-age=0
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/xml
If-None-Match: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

and the response:

HTTP/1.1 304 Not Modified
Cache-Control: must-revalidate, proxy-revalidate, max-age=30
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 04 Nov 2012 10:16:20 GMT

As you can see, everything works as expected: the user agent revalidates only when max-age has expired. Now let's try to perform the following flow through mod_proxy (see the configuration above). The first request:

GET http://proxy.lo/testpage HTTP/1.1
Host: proxy.lo
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/json
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

and the response was:

HTTP/1.1 200 OK
Date: Sun, 04 Nov 2012 10:23:36 GMT
Server: Apache
Cache-Control: private, no-cache, no-store, no-transform
Content-Type: application/json; charset=utf-8
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Content-Length: 1932
Vary: Accept,Content-Type,Content-Encoding,Accept-Language
X-Cache: MISS from proxy.lo
X-Cache-Detail: "cache miss: attempting entity save" from proxy.lo
Connection: close

OK, let's look at the disk cache to see how the request and response were stored (binary data cut):

http://proxy.lo/testpage?
Cache-Control: private, no-cache, no-store, no-transform
Content-Type: application/json; charset=utf-8
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Date: Sun, 04 Nov 2012 10:27:15 GMT
Content-Length: 1932
Vary: Accept, Content-Type, Content-Encoding, Accept-Language
Host: proxy.lo
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/json
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
X-Forwarded-Proto: http
Cache-Control: max-age=300, must-revalidate
X-Forwarded-For: 192.168.1.100
X-Forwarded-Host: proxy.lo
X-Forwarded-Server: origin.lo

OK, what do we see?
We see that the entry was stored with max-age=300 & must-revalidate. Looks good to me; let's perform the next call:

GET http://proxy.lo/testpage HTTP/1.1
Host: proxy.lo
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/json
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

and the second response from mod_proxy:

HTTP/1.1 200 OK
Date: Sun, 04 Nov 2012 10:31:58 GMT
Server: Apache
Cache-Control: private, no-cache, no-store, no-transform
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Content-Length: 1932
Vary: Accept,Content-Type,Content-Encoding,Accept-Language
X-Cache: REVALIDATE from proxy.lo
X-Cache-Detail: "conditional cache hit: entity refreshed" from proxy.lo
Connection: close
Content-Type: application/json; charset=utf-8

SO, MY QUESTION IS: WHY does mod_proxy perform revalidation on each request, regardless of the fact that max-age is defined? N.B. Apache 2.4.3. Thanks, I would be grateful for any help.
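P.S. For anyone reproducing this: the round trips are easy to watch from the command line. This is just a sketch using the host names above; the X-Cache and X-Cache-Detail headers come from the CacheHeader and CacheDetailHeader directives in my configuration.

# first request populates the cache; repeat it within 30 seconds
curl -s -D - -o /dev/null -H 'Accept: application/json' http://proxy.lo/testpage
# X-Cache: HIT        -> served from cache without contacting the origin
# X-Cache: REVALIDATE -> a conditional request went to the origin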

    Read the article

  • MSCC: Scripting - Administrator's toolbox of magic...

Finally, we made it to our April meetup - in May. The most obvious explanation is the increased number of open source and IT activities that the MSCC, the Linux User Group of Mauritius (LUGM), and the University of Mauritius Student's Computer Club are organising. It's absolutely incredible to see the recent hype of events here on the island. And I'm loving it! Unfortunately, we also had to deal with arranging a location this time. It was kind of an odyssey, as my requests (and phone calls) weren't answered even though I tried several times - well, kind of disappointing, and I have to look into that for future gatherings. In my opinion, it is essential that two parameters of a community meeting are fixed as early as possible: location, and date and time. You can't just change one or both at the very last minute. Well, this time we had to do it for unforeseen reasons, and I apologise to any MSCC member who couldn't make it to our April meetup. Okay, lesson learned, but now back to the actual meetup report ... Shortly after the meeting I posted the following statement as my first impression: "Spontaneous and improvised :) No, seriously, Ish and Dan had well prepared presentations on shell scripting, mainly focused towards Bourne Again Shell (bash), and the pros and cons of scripting versus actually writing something in a decent programming language. I thought that I could cut myself out of the equation but the demand for information about PowerShell was higher than expected..." Well, it turned out that the interest in Windows PowerShell was high, as I even got a couple of questions on it via social media networks during the evening. I'd also like to mention that the number of attendees went back to what I would call a "standard" level of participation: this time there were 12 craftsmen, again including a good number of first-timers.

Reactions of other attendees
Here are some impressions and feedback from our participants:
"Enjoyed the bash and powershell (linux / windows) presentations ..." -- Nadim on event comments
"He [Daniel] also showed us some syntax loopholes in Bash that could leave someone with bad code." -- Ish on MSCC – Let's talk about Scripting
Glad to see a couple of first-time attendees, especially students from the university itself.

Some details on the presentations
MSCC: First time visit at the University of Mauritius - Phase II Engineering Tower, room 2.9

Gimme some love ... bash and other shells
Ish gave a great introduction to shell scripting as he spoke about existing shell environments and a little bit about their history. Furthermore, he talked about various built-in commands, the use of coreutils, the ability to daisy-chain multiple commands using pipes, and the importance of the standard I/O streams and their file descriptors in advanced scripting techniques. Combined with a couple of sample statements in the Linux terminal on an Ubuntu 14.04 machine, it was a solid presentation. Have a closer look at his slides - published on his blog post MSCC – Let's talk about Scripting.

Oddities of scripting
After the brief introduction to bash it was Daniel's turn to highlight a good number of oddities you run into when working with shell scripts. First of all, it should be clear that scripting is not meant for implementing software as such, but simply for automating administrative procedures and simplifying routine jobs on a system. One of the cool oddities that he mentioned is that everything (!)
in a shell is represented by strings; there are no other types like integer, float, or date-time that you'd expect from a full-fledged programming language. Let's have a look at his sample:  more to come... What's the output?
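Since Daniel's sample is still marked "more to come" above, here is a minimal stand-in illustration of the strings-only behavior (an assumed example of my own, not his original one):

x=1
y=2
echo $x+$y     # prints the string "1+2" - no arithmetic happens
echo $((x+y))  # prints 3 - arithmetic needs explicit $(( )) expansion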
As a conclusion, Daniel suggested that shell scripting should be limited, though not restricted, to automating repetitive command stacks and batch jobs, startup wrappers for applications that set up the execution environment, and other not-too-sophisticated jobs. But as soon as a little more logic is involved, or you rely on performance, it's better to write an application in Ruby, Python, or Perl (among others, of course). This also enables you to test your code properly.

MSCC: Ish talking about Bourne Again Shell (bash) and shell scripting to automate regular tasks
MSCC: Daniel gives an overview of the pros and cons of shell scripting versus programming
MSCC: PowerShell as your scripting solution on Windows operating systems

The path of the Enlightened is long ... and tough.
Honestly, even though PowerShell was mentioned without any further details on the meetup's agenda, I didn't expect that there would be demand for a presentation on Microsoft PowerShell after all. I had already taken this topic out of the announcement, but the audience wanted to have some information. Okay then, let's see what I could do - improvised style. While my machine booted and got hooked up to the projector, I started to talk about the beginnings of PowerShell back in 2006, and its predecessors MS-DOS and the Command Prompt. A throwback in history... always good for young people. As usual, Microsoft didn't get it at that time: instead of listening to their clients' needs and demands, they ignored the need to administer Windows server farms without UI tools. PowerShell is actually a result of this, and seeing that shell scripting has been a common, reliable, and fast part of an administrator's toolbox for decades, Microsoft had to move from their Microsoft Management Console (MMC) to a broader approach. It's not as if shell scripting were something new; it is in daily use on alternative operating systems like AIX, HP-UX, Solaris, and last but not least Linux. Most interestingly, Microsoft is very good at renovating existing architectures, and over the years PowerShell not only replaced their own combination of Command Prompt and Scripting Hosts (VBScript and CScript) but really turned into a challenging competitor on the market. The shell is easy to extend with cmdlets, and open to other Microsoft products like SQL Server and SharePoint, as well as third-party software applications. Similar to MMC, PowerShell also offers the ability to administer other machines remotely - only without a graphical user interface, which makes it easier to automate and schedule regular tasks. Following is a sample of a PowerShell script file (extension .ps1):

$strComputer = "."
$colItems = get-wmiobject -class Win32_BIOS -namespace root\CIMV2 -comp $strComputer
foreach ($objItem in $colItems) {
    write-host "BIOS Characteristics: " $objItem.BiosCharacteristics
    write-host "BIOS Version: " $objItem.BIOSVersion
    write-host "Build Number: " $objItem.BuildNumber
    write-host "Caption: " $objItem.Caption
    write-host "Code Set: " $objItem.CodeSet
    write-host "Current Language: " $objItem.CurrentLanguage
    write-host "Description: " $objItem.Description
    write-host "Identification Code: " $objItem.IdentificationCode
    write-host "Installable Languages: " $objItem.InstallableLanguages
    write-host "Installation Date: " $objItem.InstallDate
    write-host "Language Edition: " $objItem.LanguageEdition
    write-host "List Of Languages: " $objItem.ListOfLanguages
    write-host "Manufacturer: " $objItem.Manufacturer
    write-host "Name: " $objItem.Name
    write-host "Other Target Operating System: " $objItem.OtherTargetOS
    write-host "Primary BIOS: " $objItem.PrimaryBIOS
    write-host "Release Date: " $objItem.ReleaseDate
    write-host "Serial Number: " $objItem.SerialNumber
    write-host "SMBIOS BIOS Version: " $objItem.SMBIOSBIOSVersion
    write-host "SMBIOS Major Version: " $objItem.SMBIOSMajorVersion
    write-host "SMBIOS Minor Version: " $objItem.SMBIOSMinorVersion
    write-host "SMBIOS Present: " $objItem.SMBIOSPresent
    write-host "Software Element ID: " $objItem.SoftwareElementID
    write-host "Software Element State: " $objItem.SoftwareElementState
    write-host "Status: " $objItem.Status
    write-host "Target Operating System: " $objItem.TargetOperatingSystem
    write-host "Version: " $objItem.Version
    write-host
}

This gives you information about your BIOS and Windows OS. Then change the computer name ($strComputer) to another machine on your network (NetBIOS-based) and run the script again. There are lots of samples and tutorials at the Microsoft Script Center, and I would advise you to pay a visit over there if you are more interested in PowerShell. The Script Center provides the download links, too.

Upcoming Events
What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order:
Hacking Defence (14. May 2014)
WebCup Maurice (7. & 8. June 2014)
Developers Conference (TBA ~ July 2014)
Linuxfest 2014 (TBA ~ November 2014)
Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or a hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks!

My resume of the day
Spontaneous and improvised :) The new location at the University of Mauritius turned out very well; there is plenty of space, and it could be a good choice for future meetings. Especially, the ability to get more and more students into our IT community sounds like a great opportunity. Later during the day, I got some promising mails from Nadim regarding future sessions at the local branch of Middlesex University. Well, we will see in the future... but for now this will be on hold until approximately October, when students resume their regular studies. Anyway, it was a good experience at the university, and thanks again to the UoM Student's Computer Club that made the necessary arrangements for the MSCC!

    Read the article

  • Binding data to subgrid

    - by bhargav
I have a jqGrid with a subgrid; the data binding is done in JavaScript like this:

<script language="javascript" type="text/javascript">
var x = screen.width;
$(document).ready(function () {
    $("#projgrid").jqGrid({
        mtype: 'POST',
        datatype: function (pdata) { getData(pdata); },
        colNames: ['Project ID', 'Due Date', 'Project Name', 'SalesRep', 'Organization:', 'Status', 'Active Value', 'Delete'],
        colModel: [
            { name: 'Project ID', index: 'project_id', width: 12, align: 'left', key: true },
            { name: 'Due Date', index: 'project_date_display', width: 15, align: 'left' },
            { name: 'Project Name', index: 'project_title', width: 60, align: 'left' },
            { name: 'SalesRep', index: 'Salesrep', width: 22, align: 'left' },
            { name: 'Organization:', index: 'customer_company_name:', width: 56, align: 'left' },
            { name: 'Status', index: 'Status', align: 'left', width: 15 },
            { name: 'Active Value', index: 'Active Value', align: 'left', width: 10 },
            { name: 'Delete', index: 'Delete', align: 'left', width: 10 }],
        pager: '#proj_pager',
        rowList: [10, 20, 50],
        sortname: 'project_id',
        sortorder: 'asc',
        rowNum: 10,
        loadtext: "Loading....",
        subGrid: true,
        shrinkToFit: true,
        emptyrecords: "No records to view",
        width: x - 100,
        height: "100%",
        rownumbers: true,
        caption: 'Projects',
        subGridRowExpanded: function (subgrid_id, row_id) {
            var subgrid_table_id, pager_id;
            subgrid_table_id = subgrid_id + "_t";
            pager_id = "p_" + subgrid_table_id;
            $("#" + subgrid_id).html("<table id='" + subgrid_table_id + "' class='scroll'></table><div id='" + pager_id + "' class='scroll'></div>");
            jQuery("#" + subgrid_table_id).jqGrid({
                mtype: 'POST',
                postData: { entityIndex: function () { return row_id } },
                datatype: function (pdata) { getactionData(pdata); },
                height: "100%",
                colNames: ['Event ID', 'Priority', 'Deadline', 'From Date', 'Title', 'Status', 'Hours', 'Contact From', 'Contact To'],
                colModel: [
                    { name: 'Event ID', index: 'Event ID' },
                    { name: 'Priority', index: 'IssueCode' },
                    { name: 'Deadline', index: 'IssueTitle' },
                    { name: 'From Date', index: 'From Date' },
                    { name: 'Title', index: 'Title' },
                    { name: 'Status', index: 'Status' },
                    { name: 'Hours', index: 'Hours' },
                    { name: 'Contact From', index: 'Contact From' },
                    { name: 'Contact To', index: 'Contact To' }],
                caption: "Action Details",
                rowNum: 10,
                pager: '#actionpager',
                rowList: [10, 20, 30, 50],
                sortname: 'Event ID',
                sortorder: "desc",
                loadtext: "Loading....",
                shrinkToFit: true,
                emptyrecords: "No records to view",
                rownumbers: true,
                ondblClickRow: function (rowid) { }
            });
            jQuery("#actiongrid").jqGrid('navGrid', '#actionpager', { edit: false, add: false, del: false, search: false });
        }
    });
    jQuery("#projgrid").jqGrid('navGrid', '#proj_pager', { edit: false, add: false, del: false, excel: true, search: false });
});

function getactionData(pdata) {
    var project_id = pdata.entityIndex();
    var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value;
    var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked;
    var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked;
    var checkOnlyContact = document.getElementById('ctl00_ContentPlaceHolder2_chkOnlyContact').checked;
    var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value;
    var nrows = pdata.rows;
    var npage = pdata.page;
    var sortindex = pdata.sidx;
    var sortdir = pdata.sord;
    var path = "project_brow.aspx/GetActionDetails"
    $.ajax({
        type: "POST",
        url: path,
        data: "{'project_id': '" + project_id + "','ChannelContact': '" + ChannelContact + "','HideCompleted': '" + HideCompleted + "','Scm': '" + Scm + "','checkOnlyContact': '" + checkOnlyContact + "','MerchantId': '" + MerchantId + "','nrows': '" + nrows + "','npage': '" + npage + "','sortindex': '" + sortindex + "','sortdir': '" + sortdir + "'}",
        contentType: "application/json; charset=utf-8",
        success: function (data, textStatus) {
            if (textStatus == "success")
                obj = jQuery.parseJSON(data.d)
            ReceivedData(obj);
        },
        error: function (data, textStatus) {
            alert('An error has occured retrieving data!');
        }
    });
}

function ReceivedData(data) {
    var thegrid = jQuery("#actiongrid")[0];
    thegrid.addJSONData(data);
}

function getData(pData) {
    var dtDateFrom = document.getElementById('ctl00_ContentPlaceHolder2_dtDateFrom_textBox').value;
    var dtDateTo = document.getElementById('ctl00_ContentPlaceHolder2_dtDateTo_textBox').value;
    var Status = document.getElementById('ctl00_ContentPlaceHolder2_ddlStatus').value;
    var Type = document.getElementById('ctl00_ContentPlaceHolder2_ddlType').value;
    var Channel = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannel').value;
    var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value;
    var Customers = document.getElementById('ctl00_ContentPlaceHolder2_txtCustomers').value;
    var KeywordSearch = document.getElementById('ctl00_ContentPlaceHolder2_txtKeywordSearch').value;
    var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked;
    var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked;
    var SelectedCustomerId = document.getElementById("<%=hdnSelectedCustomerId.ClientID %>").value
    var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value;
    var nrows = pData.rows;
    var npage = pData.page;
    var sortindex = pData.sidx;
    var sortdir = pData.sord;
    PageMethods.GetProjectDetails(SelectedCustomerId, Customers, KeywordSearch, MerchantId, Channel, Status, Type, dtDateTo, dtDateFrom, ChannelContact, HideCompleted, Scm, nrows, npage, sortindex, sortdir, AjaxSucceeded, AjaxFailed);
}

function AjaxSucceeded(data) {
    var obj = jQuery.parseJSON(data)
    if (obj != null) {
        if (obj.records != "") {
            ReceivedClientData(obj);
        } else {
            alert('No Data Available to Display')
        }
    }
}

function AjaxFailed(data) {
    alert('An error has occured retrieving data!');
}

function ReceivedClientData(data) {
    var thegrid = jQuery("#projgrid")[0];
    thegrid.addJSONData(data);
}
</script>

As you can see, projgrid is my parent grid and the action grid is my subgrid, shown on clicking the '+' symbol. Projgrid is bound and displayed, but for the subgrid I am able to get the data; the problem comes at the time of binding the data to the subgrid, which is done in the function named ReceivedData, where you can see:

function ReceivedData(data) {
    var thegrid = jQuery("#actiongrid")[0];
    thegrid.addJSONData(data);
}

"data" is exactly what I wanted, but it cannot be bound to actiongrid, which is the subgrid. Thanks in advance for help.

    Read the article

  • GoldenGate 12c Trail Encryption and Credentials with Oracle Wallet

    - by hamsun
I have been asked more than once whether the Oracle Wallet supports GoldenGate trail encryption. Although GoldenGate has supported encryption with the ENCKEYS file for years, Oracle GoldenGate 12c now also supports encryption using the Oracle Wallet. This helps improve security and makes it easier to administer. Two types of wallets can be configured in Oracle GoldenGate 12c:
The wallet that holds the master keys, used with trail or TCP/IP encryption and decryption, stored in the new 12c dirwlt/cwallet.sso file.
The wallet that holds the Oracle Database user IDs and passwords, stored in the 'credential store' in the new 12c dircrd/cwallet.sso file.
A wallet can be created using a 'create wallet' command. Adding a master key to an existing wallet is easy using the 'open wallet' and 'add masterkey' commands.

GGSCI (EDLVC3R27P0) 42> open wallet
Opened wallet at location 'dirwlt'.
GGSCI (EDLVC3R27P0) 43> add masterkey
Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.

Existing GUI wallet utilities that come with other products, such as the Oracle Database 'Oracle Wallet Manager', do not work on this version of the wallet. The default Oracle Wallet can be changed.

GGSCI (EDLVC3R27P0) 44> sh ls -ltr ./dirwlt/*
-rw-r----- 1 oracle oinstall 685 May 30 05:24 ./dirwlt/cwallet.sso
GGSCI (EDLVC3R27P0) 45> info masterkey
Masterkey Name:                 OGG_DEFAULT_MASTERKEY
Creation Date:                  Fri May 30 05:24:04 2014
Version:   Creation Date:                  Status:
1          Fri May 30 05:24:04 2014        Current

The second wallet file is used for the credentials used to connect to a database, without exposing the user ID or password. Once it is configured, this file can be copied so that the credentials are available to connect to the source or target database.

GGSCI (EDLVC3R27P0) 48> sh cp ./dircrd/cwallet.sso $GG_EURO_HOME/dircrd
GGSCI (EDLVC3R27P0) 49> sh ls -ltr ./dircrd/*
-rw-r----- 1 oracle oinstall 709 May 28 05:39 ./dircrd/cwallet.sso

The encryption wallet file can also be copied to the target machine so the replicat has access to the master key to decrypt records that are encrypted in the trail. Similar to the old ENCKEYS file, the master key wallet created on the source host must either be stored on centrally available disk or copied to all GoldenGate target hosts. The wallet is in a platform-independent format, although it is not certified for the iSeries, z/OS, and NonStop platforms.

GGSCI (EDLVC3R27P0) 50> sh cp ./dirwlt/cwallet.sso $GG_EURO_HOME/dirwlt

The new 12c UserIdAlias parameter is used to locate the credential in the wallet, so the source user ID and password do not need to be stored as parameters as long as they are in the wallet.

GGSCI (EDLVC3R27P0) 52> view param extwest
extract extwest
exttrail ./dirdat/ew
useridalias gguamer
table west.*;
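As an aside, an alias such as gguamer has to exist in the credential store before it can be referenced. Creating it looks roughly like this - a sketch only, where the user name and password are placeholders and not taken from this setup:

GGSCI> ADD CREDENTIALSTORE
GGSCI> ALTER CREDENTIALSTORE ADD USER ggsuser@amer PASSWORD ******** ALIAS gguamer
GGSCI> INFO CREDENTIALSTORE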
The EncryptTrail parameter is used to encrypt the trail using the Advanced Encryption Standard and can be used with a primary extract or a pump extract.

GGSCI (EDLVC3R27P0) 54> view param pwest
extract pwest
encrypttrail AES256
rmthost easthost, mgrport 15001
rmttrail ./dirdat/pe
passthru
table west.*;

Once the extracts are running, records can be encrypted using the wallet.

GGSCI (EDLVC3R27P0) 60> info extract *west
EXTRACT    EXTWEST   Last Started 2014-05-30 05:26   Status RUNNING
Checkpoint Lag       00:00:17 (updated 00:00:01 ago)
Process ID           24982
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2014-05-30 05:25:53
                     SCN 0.0 (0)
EXTRACT    PWEST     Last Started 2014-05-30 05:26   Status RUNNING
Checkpoint Lag       24:02:32 (updated 00:00:05 ago)
Process ID           24983
Log Read Checkpoint  File ./dirdat/ew000004
                     2014-05-29 05:23:34.748949  RBA 1483

The 'info masterkey' command is used to confirm that the wallet contains the key after copying it to the target machine. The key is needed to decrypt the data in the trail before the replicat applies the changes to the target database.

GGSCI (EDLVC3R27P0) 41> open wallet
Opened wallet at location 'dirwlt'.
GGSCI (EDLVC3R27P0) 42> info masterkey
Masterkey Name:                 OGG_DEFAULT_MASTERKEY
Creation Date:                  Fri May 30 05:24:04 2014
Version:   Creation Date:                  Status:
1          Fri May 30 05:24:04 2014        Current

Once the replicat is running, records can be decrypted using the wallet.

GGSCI (EDLVC3R27P0) 44> info reast
REPLICAT   REAST     Last Started 2014-05-30 05:28   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           25057
Log Read Checkpoint  File ./dirdat/pe000004
                     2014-05-30 05:28:16.000000  RBA 1546

There is no need for the DecryptTrail parameter when using the Oracle Wallet, unlike when using the ENCKEYS file.

GGSCI (EDLVC3R27P0) 45> view params reast
replicat reast
assumetargetdefs
discardfile ./dirrpt/reast.dsc, purge
useridalias ggueuro
map west.*, target east.*;

Once a record is inserted into the source table and committed, the encryption can be verified using logdump and then by querying the target table.

AMER_SQL>insert into west.branch values (50, 80071);
1 row created.
AMER_SQL>commit;
Commit complete.

The following encrypted record can be found using logdump:

Logdump 40 >n
2014/05/30 05:28:30.001.154 Insert               Len    28 RBA 1546
Name: WEST.BRANCH
After  Image:                                             Partition 4   G  s
  0a3e 1ba3 d924 5c02 eade db3f 61a9 164d 8b53 4331 | .>...$\....?a..M.SC1
  554f e65a 5185 0257                               | UO.ZQ..W
Bad compressed block, found length of  7075 (x1ba3), RBA 1546
GGS tokens:
TokenID x52 'R' ORAROWID         Info x00  Length   20
  4141 4157 7649 4141 4741 4141 4144 7541 4170 0001 | AAAWvIAAGAAAADuAAp..
TokenID x4c 'L' LOGCSN           Info x00  Length    7
  3231 3632 3934 33                                 | 2162943
TokenID x36 '6' TRANID           Info x00  Length   10
  3130 2e31 372e 3135 3031                          | 10.17.1501

The replicat automatically decrypted this record from the trail and then inserted the row into the target table using the wallet. This select verifies that the row was inserted into the target database and that the data is not encrypted.
EURO_SQL>select * from branch where branch_number=50;
BRANCH_NUMBER   BRANCH_ZIP
-------------   ----------
           50        80071

Book a seat in an upcoming Oracle GoldenGate 12c: Fundamentals for Oracle course now to learn more about GoldenGate 12c new features, including how to use GoldenGate with the Oracle Wallet, credentials, integrated extracts, integrated replicats, the Oracle Universal Installer, and other new features. Looking for another course? View all Oracle GoldenGate training. Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g-12c) and a GoldenGate Certified Implementation Specialist (10-11g). He has taught GoldenGate since 2010 and also has experience teaching other technical curricula, including GoldenGate Monitor, Veridata, JD Edwards, PeopleSoft, and Oracle Application Server.

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >