Search Results

Search found 5734 results on 230 pages for 'forward declarations'.

  • Vim + OmniCppComplete: Completing on Class Members which are STL containers

    - by Robert S. Barnes
    Completion on class members which are STL containers is failing. Completion on local objects which are STL containers works fine. For example, given the following files: // foo.h #include <string> class foo { public: void set_str(const std::string &); std::string get_str_reverse( void ); private: std::string str; }; // foo.cpp #include "foo.h" using std::string; string foo::get_str_reverse ( void ) { string temp; temp.assign(str); reverse(temp.begin(), temp.end()); return temp; } /* ----- end of method foo::get_str ----- */ void foo::set_str ( const string &s ) { str.assign(s); } /* ----- end of method foo::set_str ----- */ I've generated the tags for these two files using: ctags -R --c++-kinds=+pl --fields=+iaS --extra=+q . When I type temp. in the cpp I get a list of string member functions as expected. But if I type str. omnicppcomplete spits out "Pattern Not Found". I've noticed that the temp. completion only works if I have the using std::string; declaration. How do I get completion to work on my class members which are STL containers? Edit I found that completion on members which are STL containers works if I make the follow modifications to the header: // foo.h #include <string> using std::string; class foo { public: void set_str(const string &); string get_str_reverse( void ); private: string str; }; Basically, if I add using std::string; and then remove the std:: name space qualifier from the string str; member and regenerate the tags file then OmniCppComplete is able to do completion on str.. It doesn't seem to matter whether or not I have let OmniCpp_DefaultNamespaces = ["std", "_GLIBCXX_STD"] set in the .vimrc. The problem is that putting using declarations in header files seems like a big no-no, so I'm back to square one.

    Read the article

  • How to pass parameters to Apache Pivot's wtkx:include tag?

    - by Andrew Swan
    I need to create a reusable UI component that accepts a number of parameters (e.g. an image URL and some label text), similar to how JSP tags can accept parameters. The Pivot docs for the "wtkx:include" tag say: The tag allows a WTKX file to embed content defined in an external WTKX file as if it was defined in the source file itself. This is useful for ... defining reusable content templates I was hoping that I could define my component in a WTKX file using standard Pivot components (e.g. a TextInput) and pass it one or more parameters; for example my reusable template called "row.wtkx" might contain a row with an image and a text field, like this (where the ${xxx} bits are the parameters): <TablePane.Row xmlns="org.apache.pivot.wtk"> <ImageView image="@images/${image_url}" /> <TextInput text="${title}" /> </TablePane.Row> I could then reuse this component within a TablePane as follows: <rows> <TablePane.Row> <Label text="Painting"/> <Label text="Title"/> </TablePane.Row> <wtkx:include src="row.wtkx" image_url="mona_lisa.jpg" title="Mona Lisa"/> <wtkx:include src="row.wtkx" image_url="pearl_earring.jpg" title="Girl with a Pearl Earring"/> <wtkx:include src="row.wtkx" image_url="melting_clocks.jpg" title="Melting Clocks"/> </rows> I've made up the ${...} syntax myself just to show what I'm trying to do. Also, there could be other ways to pass the parameter values other than using attributes of the "wtkx:include" tag itself, e.g. pass a JSON-style map called say "args". The ability to pass parameters like this would make the include tag much more powerful, e.g. in my case allow me to eliminate a lot of duplication between the table row declarations. Or is "wtkx:include" not the right way to be doing this?

    Read the article

  • How to define an extern "C" struct-returning function in C++ using MSVC?

    - by DK
    The following source file will not compile with the MSVC compiler (v15.00.30729.01): /* stest.c */ #ifdef __cplusplus extern "C" { #endif struct Test; extern struct Test make_Test(int x); struct Test { int x; }; extern struct Test make_Test(int x) { struct Test r; r.x = x; return r; } #ifdef __cplusplus } #endif Compiling with cl /c /Tpstest.c produces the following error: stest.c(8) : error C2526: 'make_Test' : C linkage function cannot return C++ class 'Test' stest.c(6) : see declaration of 'Test' Compiling without /Tp (which tells cl to treat the file as C++) works fine. The file also compiles fine in DigitalMars C and GCC (from mingw) in both C and C++ modes. I also used -ansi -pedantic -Wall with GCC and it had no complaints. For reasons I will go into below, we need to compile this file as C++ for MSVC (not for the others), but with functions being compiled as C. In essence, we want a normal C compiler... except for about six lines. Is there a switch or attribute or something I can add that will allow this to work? The code in question (though not the above; that's just a reduced example) is being produced by a code generator. As part of this, we need to be able to generate floating-point NaNs and infinities as constants (long story), meaning we have to compile with MSVC in C++ mode in order to actually do this. We only found one solution that works, and it only works in C++ mode. We're wrapping the code in extern "C" {...} because we want to control the mangling and calling convention so that we can interface with existing C code. ... also because I trust C++ compilers about as far as I could throw a smallish department store. I also tried wrapping just the reinterpret_cast line in extern "C++" {...}, but of course that doesn't work. Pity. There is a potential solution I found which requires reordering the declarations such that the full struct definition comes before the function's forward declaration, but this is very inconvenient due to the way the codegen is performed, so I'd really like to avoid having to go down that road if I can.
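
    One workaround that avoids both the reordering and the by-value return is to hand the result back through an out-parameter, so that no class type ever crosses the C-linkage boundary by value. The sketch below is hypothetical (it is not what the code generator currently emits, and the file name is made up); it compiles as plain C and, with /Tp, as C++:

        /* stest_alt.c - a hedged sketch, not the original generator output:
           returning through an out-parameter sidesteps C2526 because the
           C-linkage function no longer returns a class type by value. */
        #ifdef __cplusplus
        extern "C" {
        #endif

        struct Test;                                      /* forward declaration    */
        extern void make_Test(int x, struct Test *out);   /* no class type returned */

        struct Test { int x; };

        extern void make_Test(int x, struct Test *out)
        {
            out->x = x;
        }

        #ifdef __cplusplus
        }
        #endif

    The call sites change from assigning a returned value to passing the address of a local, which may or may not be acceptable to the generator.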

    Read the article

  • Delphi Unit local variables - how to make each instance unique?

    - by Justin
    Ok, this, I'm sure is something simple that is easy to do. The problem : I've inherited scary spaghetti code and am slowly trying to better it when new features need adding - generally when a refactor makes adding the new feature neater. I've got a bunch of code I'm packing into a single unit which, in different places in the application, controls the same physical thing in the outside world. The control appears in several places in the application and operates slightly differently in each instance. What I've done is to create a unit with all of the features I need which I can simply drop, as a frame, into each form that requires it. Each form then uses the unit's interface methods to customise the behaviour for each instance. The problem within the problem : In the unit in question (the frame) I have a variable declared in the IMPLEMENTATION section - local to the unit. I also have a procedure, declared in the TYPE section which takes an argument and assigns that argument to the local variable in question - each form passes a unique variable to each instance of the frame/unit. What I want it to do is for each instance of the frame to keep its own version of that variable, different from the others, and use that to define how it operates. What seems to be happening, however, is that all instances are using the same value, even if I explicitly pass each instance a different variable. ie: Unit FlexibleUnit; interface uses //the uses stuff type TFlexibleUnit=class(TFrame) //declarations including procedure makeThisInstanceX(passMeTheVar:integer); private // public // end; implementation uses //the uses var myLocalVar; procedure makeThisInstanceX(passMeTheVar:integer); begin myLocalVar:=passMeTheVar; end; //other procedures using myLocalVar //etc to the end; Now somewhere in another Form I've dropped this Frame onto the Design pane, sometimes two of these frames on one Form, and have it declared in the proper places, etc. Each is unique in that : ThisFlexibleUnit : TFlexibleUnit; ThatFlexibleUnit : TFlexibleUnit; and when I do a: ThisFlexibleUnit.makeThisInstanceX(var1); //want to behave in way "var1" ThatFlexibleUnit.makeThisInstanceX(var2); //want to behave in way "var2" it seems that they both share the same variable "myLocalVar". Am I doing this wrong, in principle? If this is the correct method then it's a matter of debugging what I have (which is too huge to post) but if this is not correct in principle then is there a way to do what I am suggesting? Thanks in advance, Stack Overflow - you guys (and gals!) are legendary.
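
    The Delphi-side fix is to move myLocalVar out of the unit's implementation section, where it is a single variable shared by every frame instance (effectively a global), and into the private section of TFlexibleUnit so that each frame carries its own copy, assigned in makeThisInstanceX. A minimal C++ analogue of the same principle, shown only because the idea translates directly (all names here are made up):

        // Sketch: a file-scope variable is shared by all instances, a member
        // field is per-instance -- the same distinction as unit-level var vs.
        // private field of the frame class in Delphi.
        #include <iostream>

        static int sharedVar = 0;                // ~ var in the unit's implementation section

        class FlexibleUnit {                     // ~ TFlexibleUnit = class(TFrame)
            int instanceVar = 0;                 // ~ field in the frame's private section
        public:
            void makeThisInstanceX(int v) {
                sharedVar   = v;                 // last writer wins, for every instance
                instanceVar = v;                 // each instance keeps its own value
            }
            void report() const {
                std::cout << "shared=" << sharedVar
                          << " instance=" << instanceVar << "\n";
            }
        };

        int main() {
            FlexibleUnit thisUnit, thatUnit;
            thisUnit.makeThisInstanceX(1);
            thatUnit.makeThisInstanceX(2);
            thisUnit.report();                   // shared=2 instance=1
            thatUnit.report();                   // shared=2 instance=2
        }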

    Read the article

  • Renaming nodes and values with xslt

    - by T.K.
    Hello world, I'm new to XSLT and have a task that I'm not really sure how to approach. I want to rename nodes, but maintain the format of all node declarations. In the actual context I'll be applying this to, I'll be doing a series of renames like this, but for the sake of brevity, the sample I've written up only involves renaming one node. I am using XSLT 1.0. Input: <variables> <var> <RENAME> a </RENAME> </var> <var RENAME='b'/> <var> <DO_NOT_TOUCH> c </DO_NOT_TOUCH> </var> <var DO_NOT_TOUCH='d'/> </variables> Desired Output: <variables> <var> <DONE> a </DONE> </var> <var DONE='b'/> <var> <DO_NOT_TOUCH> c </DO_NOT_TOUCH> </var> <var DO_NOT_TOUCH='d'/> </variables> My XSLT: <xsl:template match="RENAME"> <RENAMED> <xsl:apply-templates select="@*|node()"/> </RENAMED> </xsl:template> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> Current Output: <variables> <var> <RENAMED> a </RENAMED> </var> <var RENAME="b"> </var> <var> <DO_NOT_TOUCH> c </DO_NOT_TOUCH> </var> <var DO_NOT_TOUCH="d"> </var> </variables>

    Read the article

  • C++ Class Access Specifier Verbosity

    - by PolyTex
    A "traditional" C++ class (just some random declarations) might resemble the following: class Foo { public: Foo(); explicit Foo(const std::string&); ~Foo(); enum FooState { Idle, Busy, Unknown }; FooState GetState() const; bool GetBar() const; void SetBaz(int); private: struct FooPartialImpl; void HelperFunction1(); void HelperFunction2(); void HelperFunction3(); FooPartialImpl* m_impl; // smart ptr FooState m_state; bool m_bar; int m_baz; }; I always found this type of access level specification ugly and difficult to follow if the original programmer didn't organize his "access regions" neatly. Taking a look at the same snippet in a Java/C# style, we get: class Foo { public: Foo(); public: explicit Foo(const std::string&); public: ~Foo(); public: enum FooState { Idle, Busy, Unknown }; public: FooState GetState() const; public: bool GetBar() const; public: void SetBaz(int); private: struct FooPartialImpl; private: void HelperFunction1(); private: void HelperFunction2(); private: void HelperFunction3(); private: FooPartialImpl* m_impl; // smart ptr private: FooState m_state; private: bool m_bar; private: int m_baz; }; In my opinion, this is much easier to read in a header because the access specifier is right next to the target, and not a bunch of lines away. I found this especially true when working with header-only template code that wasn't separated into the usual "*.hpp/*.inl" pair. In that scenario, the size of the function implementations overpowered this small but important information. My question is simple and stems from the fact that I've never seen anyone else actively do this in their C++ code. Assuming that I don't have a "Class View" capable IDE, are there any obvious drawbacks to using this level of verbosity? Any other style recommendations are welcome!

    Read the article

  • Problem separating C++ code into header, inline functions, and code files.

    - by YuppieNetworking
    Hello all, I have the simplest code that I want to separate in three files: Header file: class and struct declarations. No implementations at all. Inline functions file: implementation of inline methods in header. Code file: normal C++ code for more complicated implementations. When I was about to implement an operator[] method, I couldn't manage to compile it. Here is a minimal example that shows the same problem: Header (myclass.h): #ifndef _MYCLASS_H_ #define _MYCLASS_H_ class MyClass { public: MyClass(const int n); virtual ~MyClass(); double& operator[](const int i); double operator[](const int i) const; void someBigMethod(); private: double* arr; }; #endif /* _MYCLASS_H_ */ Inline functions (myclass-inl.h): #include "myclass.h" inline double& MyClass::operator[](const int i) { return arr[i]; } inline double MyClass::operator[](const int i) const { return arr[i]; } Code (myclass.cpp): #include "myclass.h" #include "myclass-inl.h" #include <iostream> inline MyClass::MyClass(const int n) { arr = new double[n]; } inline MyClass::~MyClass() { delete[] arr; } void MyClass::someBigMethod() { std::cout << "Hello big method that is not inlined" << std::endl; } And finally, a main to test it all: #include "myclass.h" #include <iostream> using namespace std; int main(int argc, char *argv[]) { MyClass m(123); double x = m[1]; m[1] = 1234; cout << "m[1]=" << m[1] << endl; x = x + 1; return 0; } void nothing() { cout << "hello world" << endl; } When I compile it, it says: main.cpp:(.text+0x1b): undefined reference to 'MyClass::MyClass(int)' main.cpp:(.text+0x2f): undefined reference to 'MyClass::operator[](int)' main.cpp:(.text+0x49): undefined reference to 'MyClass::operator[](int)' main.cpp:(.text+0x65): undefined reference to 'MyClass::operator[](int)' However, when I move the main method to the MyClass.cpp file, it works. Could you guys help me spot the problem? Thank you.
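
    The undefined references follow from how inline works: an inline member function must be defined in every translation unit that uses it, but main.cpp only includes myclass.h, and the constructor and destructor are additionally marked inline inside myclass.cpp where no other file can see them. One conventional arrangement is sketched below (a sketch, not the only option; the guard name is also changed because identifiers beginning with an underscore and a capital letter are reserved):

        // myclass.h -- declarations only; the inline bodies are pulled in at
        // the bottom so every file that includes this header can see them.
        #ifndef MYCLASS_H
        #define MYCLASS_H

        class MyClass {
        public:
            explicit MyClass(int n);
            virtual ~MyClass();
            double& operator[](int i);
            double  operator[](int i) const;
            void someBigMethod();
        private:
            double* arr;
        };

        #include "myclass-inl.h"   // inline definitions travel with the header
        #endif

        // myclass-inl.h -- only the members that are genuinely meant to be
        // inline; included from myclass.h, never compiled on its own.
        inline double& MyClass::operator[](int i)       { return arr[i]; }
        inline double  MyClass::operator[](int i) const { return arr[i]; }

        // myclass.cpp -- everything else, *not* marked inline, so the linker
        // finds exactly one out-of-line definition for main.cpp to use.
        #include "myclass.h"
        #include <iostream>

        MyClass::MyClass(int n) : arr(new double[n]) {}
        MyClass::~MyClass() { delete[] arr; }
        void MyClass::someBigMethod() { std::cout << "Hello big method that is not inlined" << std::endl; }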

    Read the article

  • What's the C strategy to "imitate" a C++ template?

    - by Andrei Ciobanu
    After reading some examples on stackoverflow, and following some of the answers for my previous questions (1), I've eventually come with a "strategy" for this. I've come to this: 1) Have a declare section in the .h file. Here I will define the data-structure, and the accesing interface. Eg.: /** * LIST DECLARATION. (DOUBLE LINKED LIST) */ #define NM_TEMPLATE_DECLARE_LIST(type) \ typedef struct nm_list_elem_##type##_s { \ type data; \ struct nm_list_elem_##type##_s *next; \ struct nm_list_elem_##type##_s *prev; \ } nm_list_elem_##type ; \ typedef struct nm_list_##type##_s { \ unsigned int size; \ nm_list_elem_##type *head; \ nm_list_elem_##type *tail; \ int (*cmp)(const type e1, const type e2); \ } nm_list_##type ; \ \ nm_list_##type *nm_list_new_##type##_(int (*cmp)(const type e1, \ const type e2)); \ \ (...other functions ...) 2) Wrap the functions in the interface inside MACROS: /** * LIST INTERFACE */ #define nm_list(type) \ nm_list_##type #define nm_list_elem(type) \ nm_list_elem_##type #define nm_list_new(type,cmp) \ nm_list_new_##type##_(cmp) #define nm_list_delete(type, list, dst) \ nm_list_delete_##type##_(list, dst) #define nm_list_ins_next(type,list, elem, data) \ nm_list_ins_next_##type##_(list, elem, data) (...others...) 3) Implement the functions: /** * LIST FUNCTION DEFINITIONS */ #define NM_TEMPLATE_DEFINE_LIST(type) \ nm_list_##type *nm_list_new_##type##_(int (*cmp)(const type e1, \ const type e2)) \ {\ nm_list_##type *list = NULL; \ list = nm_alloc(sizeof(*list)); \ list->size = 0; \ list->head = NULL; \ list->tail = NULL; \ list->cmp = cmp; \ }\ void nm_list_delete_##type##_(nm_list_##type *list, \ void (*destructor)(nm_list_elem_##type elem)) \ { \ type data; \ while(nm_list_size(list)){ \ data = nm_list_rem_##type(list, tail); \ if(destructor){ \ destructor(data); \ } \ } \ nm_free(list); \ } \ (...others...) In order to use those constructs, I have to create two files (let's call them templates.c and templates.h) . In templates.h I will have to NM_TEMPLATE_DECLARE_LIST(int), NM_TEMPLATE_DECLARE_LIST(double) , while in templates.c I will need to NM_TEMPLATE_DEFINE_LIST(int) , NM_TEMPLATE_DEFINE_LIST(double) , in order to have the code behind a list of ints, doubles and so on, generated. By following this strategy I will have to keep all my "template" declarations in two files, and in the same time, I will need to include templates.h whenever I need the data structures. It's a very "centralized" solution. Do you know other strategy in order to "imitate" (at some point) templates in C++ ? Do you know a way to improve this strategy, in order to keep things in more decentralized manner, so that I won't need the two files: templates.c and templates.h ?
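
    The two central files are not actually forced by this strategy: the DECLARE macro can live in a small per-type header and the DEFINE macro in the matching .c file, owned by whichever module cares about that element type. A hedged sketch, with hypothetical file and type names (nm_templates.h is assumed to hold only the macro definitions themselves):

        /* point_list.h - hypothetical per-type header: any module that wants
           a list of point_t includes this instead of a central templates.h. */
        #ifndef POINT_LIST_H
        #define POINT_LIST_H
        #include "nm_templates.h"          /* the DECLARE/DEFINE macros only   */
        #include "point.h"                 /* typedef struct { ... } point_t;  */
        NM_TEMPLATE_DECLARE_LIST(point_t)
        #endif

        /* point_list.c - the one translation unit that expands the function
           bodies for this type, replacing the central templates.c. */
        #include "point_list.h"
        NM_TEMPLATE_DEFINE_LIST(point_t)

    Each new element type then costs one small header/source pair in the module that owns it, and nothing has to be registered in a project-wide file. If two independent modules might instantiate the same type, the DEFINE macro would need to emit static functions (or the pair would have to be shared) to avoid duplicate symbols at link time.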

    Read the article

  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation but I cannot manage to make it work. Here's a sample: #!/bin/bash TRUE=1 FALSE=0 ############### Added for testing log4bash_LOG_ENABLED=$TRUE log4bash_rootLogger=$TRACE,f,s log4bash_appender_f=file log4bash_appender_f_dir=$(pwd) log4bash_appender_f_file=test.log log4bash_appender_f_roll_format=%Y%m log4bash_appender_f_roll=$TRUE log4bash_appender_f_maxBackupIndex=10 #################################### log4bash_abs(){ if [ "${1:0:1}" == "." ]; then builtin echo ${rootDir}/${1} else builtin echo ${1} fi } log4bash_check_app_dir(){ if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then dir=$(log4bash_abs $1) if [ ! -d ${dir} ]; then #log a seperation line mkdir $dir fi fi } # Delete old log files # $1 Log directory # $2 Log filename # $3 Log filename suffix # $4 Max backup index log4bash_delete_old_files(){ ##### Added for testing builtin echo "Running log4bash_delete_old_files $@" &2 ##### if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then local directory=$(log4bash_abs $1) local filename=$2 local maxBackupIndex=$4 local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g') local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt) local fileCnt=$(builtin echo -e "${logFileList}" | wc -l) local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex)) local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g')) ##### Added for testing builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" &2 ##### if [ ${fileToDeleteCnt} -gt 0 ]; then for f in "${fileToDelete[@]}"; do #### Added for testing builtin echo "Removing file ${f}" &2 #### builtin eval rm -f ${f} done fi fi } #Appender # $1 Log directory # $2 Log file # $3 Log file roll ? 
# $4 Appender Name log4bash_filename(){ builtin echo "Running log4bash_filename $@" &2 local format local filename log4bash_check_app_dir "${1}" if [ ${3} -eq 1 ];then local formatProp=${4}_roll_format format=${!formatProp} if [ -z ${format} ]; then format=$log4bash_appender_file_format fi local suffix=.`date "+${format}"` filename=${1}/${2}${suffix} # Old log files deletion local previousFilenameVar=int_${4}_file_previous local maxBackupIndexVar=${4}_maxBackupIndex if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then builtin eval export $previousFilenameVar=$filename log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}" else builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}" fi else filename=${1}/${2} fi builtin echo $filename } ######################## Added for testing filename_caller(){ builtin echo "filename_caller Call $1" output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" )) builtin echo ${output} } #### Previous logs generation for i in {1101..1120}; do file="${log4bash_appender_f_file}.2012${i:2:3}" builtin echo "${file} $i" touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file done for i in {1..4}; do filename_caller $i done I expect log4bash_filename function to step into the following if only when the calculated log filename is different from the previous one: if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but it's not the case, so log4bash_filename steps into this if all the time which is really not necessary... It looks like the issue is due to the following line not working properly: builtin eval export $previousFilenameVar=$filename I have a some theories to explain why: in the original code, functions are declared and exported as readonly which makes them unable to modify global variable. I removed readonly declarations in the above sample, but probleme persists. Function calls are performed in $() which should make them run into seperated shell instances so variable modified are not exported to the main shell But I cannot manage to find a workaround to this issue... Any help is appreciated, thanks in advance!

    Read the article

  • emacs: how do I use edebug on code that is defined in a macro?

    - by Cheeso
    I don't even know the proper terminology for this lisp syntax, so I don't know if the words I'm using to ask the question, make sense. But the question makes sense, I'm sure. So let me just show you. cc-mode (cc-fonts.el) has things called "matchers" which are bits of code that run to decide how to fontify a region of code. That sounds simple enough, but the matcher code is in a form I don't completely understand, with babckticks and comma-atsign and just comma and so on, and furthermore it is embedded in a c-lang-defcost, which itself is a macro. And I want to run edebug on that code. Look: (c-lang-defconst c-basic-matchers-after "Font lock matchers for various things that should be fontified after generic casts and declarations are fontified. Used on level 2 and higher." t `(;; Fontify the identifiers inside enum lists. (The enum type ;; name is handled by `c-simple-decl-matchers' or ;; `c-complex-decl-matchers' below. ,@(when (c-lang-const c-brace-id-list-kwds) `((,(c-make-font-lock-search-function (concat "\\<\\(" (c-make-keywords-re nil (c-lang-const c-brace-id-list-kwds)) "\\)\\>" ;; Disallow various common punctuation chars that can't come ;; before the '{' of the enum list, to avoid searching too far. "[^\]\[{}();,/#=]*" "{") '((c-font-lock-declarators limit t nil) (save-match-data (goto-char (match-end 0)) (c-put-char-property (1- (point)) 'c-type 'c-decl-id-start) (c-forward-syntactic-ws)) (goto-char (match-end 0))))))) I am reading up on lisp syntax to figure out what those things are and what to call them, but aside from that, how can I run edebug on the code that follows the comment that reads ;; Fontify the identifiers inside enum lists. ? I know how to run edebug on a defun - just invoke edebug-defun within the function's definition, and off I go. Is there a corresponding thing I need to do to edebug the cc-mode matcher code forms?

    Read the article

  • 'Scanner' does not name a type error in g++

    - by Max
    Hi. I'm trying to compile code in g++ and I get the following errors: In file included from scanner.hpp:8, from scanner.cpp:5: parser.hpp:14: error: ‘Scanner’ does not name a type parser.hpp:15: error: ‘Token’ does not name a type Here's my g++ command: g++ parser.cpp scanner.cpp -Wall Here's parser.hpp: #ifndef PARSER_HPP #define PARSER_HPP #include <string> #include <map> #include "scanner.hpp" using std::string; class Parser { // Member Variables private: Scanner lex; // Lexical analyzer Token look; // tracks the current lookahead token // Member Functions <some function declarations> }; #endif and here's scanner.hpp: #ifndef SCANNER_HPP #define SCANNER_HPP #include <iostream> #include <cctype> #include <string> #include <map> #include "parser.hpp" using std::string; using std::map; enum { // reserved words BOOL, ELSE, IF, TRUE, WHILE, DO, FALSE, INT, VOID, // punctuation and operators LPAREN, RPAREN, LBRACK, RBRACK, LBRACE, RBRACE, SEMI, COMMA, PLUS, MINUS, TIMES, DIV, MOD, AND, OR, NOT, IS, ADDR, EQ, NE, LT, GT, LE, GE, // symbolic constants NUM, ID, ENDFILE, ERROR }; class Token { public: int tag; int value; string lexeme; Token() {tag = 0;} Token(int t) {tag = t;} }; class Num : public Token { public: Num(int v) {tag = NUM; value = v;} }; class Word : public Token { public: Word() {tag = 0; lexeme = "default";} Word(int t, string l) {tag = t; lexeme = l;} }; class Scanner { private: int line; // which line the compiler is currently on int depth; // how deep in the parse tree the compiler is map<string,Word> words; // list of reserved words and used identifiers // Member Functions public: Scanner(); Token scan(); string printTag(int); friend class Parser; }; #endif anyone see the problem? I feel like I'm missing something incredibly obvious.
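
    The error is a circular-include problem: scanner.cpp includes scanner.hpp, which includes parser.hpp, but SCANNER_HPP is already defined at that point, so parser.hpp gets compiled before Scanner and Token have been seen. parser.hpp genuinely needs scanner.hpp (it holds a Scanner and a Token by value), but scanner.hpp only ever mentions Parser in a friend declaration, so its #include "parser.hpp" can be replaced with a forward declaration. A sketch of the fixed scanner.hpp, with the class bodies and token-tag enum elided:

        // scanner.hpp -- drop #include "parser.hpp" to break the cycle; the
        // friend declaration needs only the name Parser, not its definition.
        #ifndef SCANNER_HPP
        #define SCANNER_HPP
        #include <string>
        #include <map>

        class Parser;                      // forward declaration instead of the include

        // ...token-tag enum unchanged...
        class Token { /* ...unchanged... */ };
        class Word : public Token { /* ...unchanged... */ };

        class Scanner {
        public:
            Scanner();
            Token scan();
            std::string printTag(int);
            friend class Parser;           // satisfied by the declaration above
        private:
            int line;
            int depth;
            std::map<std::string, Word> words;
        };

        #endif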

    Read the article

  • What is the most idiomatic way of emulating Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests for in environment with a very old version of Test::More (perl5.8 with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_tests - it's generally a bad idea not catching when your unit test exits prematurely - say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable amount of tests, assuming no no_tests or done_testing() is used? Details: My unit tests usually take the form of: use Test::More; my @test_set = ( [ "Test #1", $param1, $param2, ... ] ,[ "Test #1", $param1, $param2, ... ] # ,... ); foreach my $test (@test_set) { run_test($test); } sub run_test { # $expected_tests += count_tests($test); ok(test1($test)) || diag("Test1 failed"); # ... } The standard approach of use Test::More tests => 23; or BEGIN {plan tests => 23} does not work since both are obviously executed before @tests is known. My current approach involves making @tests global and defining it in the BEGIN {} block as follows: use Test::More; BEGIN { our @test_set = (); # Same set of tests as above my $expected_tests = 0; foreach my $test (@tests) { my $expected_tests += count_tests($test); } plan tests = $expected_tests; } our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :( foreach my $test (@test_set) { run_test($test); } # Same sub run_test {} # Same I feel this can be done more idiomatically but not certain how to improve. Chief among the smells is the duplicate our @test_test declarations - in BEGIN{} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests=>$total_tests_calculated). I'm not sure if it's any better idiomatically-wise.

    Read the article

  • How should I smooth the transition between these two states in flex/flashbuilder

    - by Joshua
    I have an item in which has two states, best described as open and closed, and they look like this: and And what I'd like to do is is smooth the transition between one state and the other, effectively by interpolating between the two points in a smooth manner (sine) to move the footer/button-block and then fade in the pie chart. However this is apparently beyond me and after wrestling with my inability to do so for an hour+ I'm posting it here :D So my transition block looks as follows <s:transitions> <s:Transition id="TrayTrans" fromState="*" toState="*"> <s:Sequence> <s:Move duration="400" target="{footer}" interpolator="{Sine}"/> <s:Fade duration="300" targets="{body}"/> </s:Sequence> </s:Transition> <s:Transition> <s:Rotate duration="3000" /> </s:Transition> </s:transitions> where {body} refers to the pie chart and {footer} refers to the footer/button-block. However this doesn't work so I don't really know what to do... Additional information which may be beneficial: The body block is always of fixed height (perhaps of use for the Xby variables in some effects?). It needs to work in both directions. Oh and the Sine block is defined above in declarations just as <s:Sine id="Sine">. Additionally! How would I go about setting the pie chart to rotate continually using these transition blocks? (this would occur without the labels on) Or is that the wrong way to go about it as it's not a transition as such? The effect I'm after is one where the pie chart rotates slowly without labels prior to a selection of a button below, but on selection the rotation stops and labels appear... Thanks a lot in advance! And apologies on greyscale, but I can't really decide on a colour scheme. Any suggestions welcome.

    Read the article

  • C++ dynamic type construction and detection

    - by KneLL
    Here is an interesting C++ problem, though it concerns architecture more than the language itself. There are many (10, 20, 40, etc.) classes that describe some characteristics (mix-in classes), for example: struct Base { virtual ~Base() {} }; struct A : virtual public Base { int size; }; struct B : virtual public Base { float x, y; }; struct C : virtual public Base { bool some_bool_state; }; struct D : virtual public Base { string str; } // .... Primary module declares and exports a function (for simplicity just function declarations without classes): // .h file void operate(Base *pBase); // .cpp file void operate(Base *pBase) { // .... } Any other module can have code like this: #include "mixins.h" #include "primary.h" class obj1_t : public A, public C, public D {}; class obj2_t : public B, public D {}; // ... void Pass() { obj1_t obj1; obj2_t obj2; operate(&obj1); operate(&obj2); } The question is how to know the real type of a given object inside operate() without dynamic_cast and without any type information in the classes (constants, etc.)? operate() is called on a big array of objects in short time periods and dynamic_cast is too slow for it. And I don't want to include constants (enum obj_type { ... }) because this is not the OOP way. // module operate.cpp void some_operate(Base *pBase) { processA(pBase); processB(pBase); } void processA(A *pA) { } void processB(B *pB) { } I cannot directly pass a pBase to these functions. And it's impossible to have all possible combinations of classes, because I can add new classes just by including new .h files. As one solution that came to mind, in the editor application I can use a composite container: struct CompositeObject { vector<Base *pBase> parts; }; But the editor does not need this time optimization and can use dynamic_cast for parts to determine the exact type. In operate() I cannot use this solution. So, is it possible to solve this problem without dynamic_cast and type information? Or maybe I should use another architecture?
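
    One dynamic_cast-free sketch (every name below is mine, not from the post): let each mix-in register a tagged pointer to its own subobject with the shared Base at construction time, so operate() can recover a correctly typed pointer with a short scan. This does reintroduce a small per-class tag, which the post hoped to avoid, but Base and operate() stay open to mix-ins added later in new headers:

        // Hedged sketch: per-mix-in registration instead of dynamic_cast.
        #include <vector>
        #include <utility>

        struct Base {
            virtual ~Base() {}
            std::vector<std::pair<int, void*>> parts;   // (tag, subobject pointer)

            template <class T>
            T* as() {                                   // cheap linear scan, no RTTI
                for (auto& p : parts)
                    if (p.first == T::Tag) return static_cast<T*>(p.second);
                return nullptr;
            }
        };

        struct A : virtual public Base {
            enum { Tag = 1 };                           // hypothetical tag, unique per mix-in
            A() { parts.emplace_back(Tag, static_cast<void*>(this)); }
            int size = 0;
        };

        struct B : virtual public Base {
            enum { Tag = 2 };
            B() { parts.emplace_back(Tag, static_cast<void*>(this)); }
            float x = 0, y = 0;
        };

        void operate(Base* pBase)
        {
            if (A* a = pBase->as<A>()) { a->size += 1; }       // no dynamic_cast needed
            if (B* b = pBase->as<B>()) { b->x = b->y = 0.0f; }
        }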

    Read the article

  • Finding a Relative Path in .NET

    - by Rick Strahl
    Here’s a nice and simple path utility that I’ve needed in a number of applications: I need to find a relative path based on a base path. So if I’m working in a folder called c:\temp\templates\ and I want to find a relative path for c:\temp\templates\subdir\test.txt I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ – in other words always return a non-hardcoded path based on some other known directory. I’ve had a routine in my library that does this via some lengthy string parsing routines, but ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here’s the simple static method: /// <summary> /// Returns a relative path string from a full path based on a base path /// provided. /// </summary> /// <param name="fullPath">The path to convert. Can be either a file or a directory</param> /// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param> /// <returns> /// String of the relative path. /// /// Examples of returned values: /// test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt /// </returns> public static string GetRelativePath(string fullPath, string basePath ) { // ForceBasePath to a path if (!basePath.EndsWith("\\")) basePath += "\\"; Uri baseUri = new Uri(basePath); Uri fullUri = new Uri(fullPath); Uri relativeUri = baseUri.MakeRelativeUri(fullUri); // Uri's use forward slashes so convert back to backward slashes return relativeUri.ToString().Replace("/", "\\"); } You can then call it like this: string relPath = FileUtils.GetRelativePath("c:\temp\templates","c:\temp\templates\subdir\test.txt") It’s not exactly rocket science but it’s useful in many scenarios where you’re working with files based on an application base directory. Right now I’m working on a templating solution (using the Razor Engine) where templates live in a base directory and are supplied as relative paths to that base directory. Resolving these relative paths both ways is important in order to properly check for existance of files and their change status in this case. Not the kind of thing you use every day, but useful to remember.© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform including now directly from Microsoft. jQuery is a light weight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery on www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading to do JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level for me by seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS) as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation and we can look forward toward more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET which provides some piece of mind to some corporate development shops that require end to end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and have been approved for inclusion in the official jQuery plug-in repository and been taken over by the jQuery team for further improvements and maintenance. 
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing and in fact a large chunk of developers did so readily prior to Microsoft’s announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many projects types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery Microsoft is now following along what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft’s shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers they lacked in useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos for Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported so if you’re using them in your applications there’s no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side so that you can leave existing solutions untouched even as you enhance them with jQuery. 
The Microsoft jQuery Plug-ins – Solid Core Features One of the most interesting developments in Microsoft’s embracing of jQuery is that Microsoft has started contributing to jQuery via standard mechanism set for jQuery developers: By submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library and re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team and that’s exactly what happened with the three plug-ins submitted by Microsoft with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft: jQuery Templates – a client side template rendering engine jQuery Data Link – a client side databinder that can synchronize changes without code jQuery Globalization – provides formatting and conversion features for dates and numbers The first two are ports of functionality that was slated for the Microsoft Ajax Library while functionality for the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I’ve previously used in other incarnations, but with more complete implementations. Let’s take a close look at these plug-ins. jQuery Templates http://api.jquery.com/category/plugins/templates/ Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid from manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML which you can then inject into the document or replace existing content with. Output from templates are rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimize client side code and reduce repeated code for rendering logic. Instead a single template can be used in many places for updating and adding content to existing pages. Further if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to a use a single markup template to handle all rendering of each specific HTML section/element. I’ve used a number of different client rendering template engines with jQuery in the past including jTemplates (a PHP style templating engine) and a modified version of John Resig’s MicroTemplating engine which I built into my own set of libraries because it’s such a commonly used feature in my client side applications. jQuery templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig’s original Micro Template engine, the core basics of the templating engine create JavaScript code which means that templates can include JavaScript code. To give you a basic idea of how templates work imagine I have an application that downloads a set of stock quotes based on a symbol list then displays them in the document. 
To do this you can create an ‘item’ template that describes how each of the quotes is renderd as a template inside of the document: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div><div>${LastPrice}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div> </div> </script> The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following: <script type="text/javascript"> //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/"); $(document).ready(function () { $("#btnGetQuotes").click(GetQuotes); }); function GetQuotes() { var symbols = $("#txtSymbols").val().split(","); $.ajax({ url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes", data: JSON.stringify({ symbols: symbols }), // parameter map type: "POST", // data has to be POSTed contentType: "application/json", timeout: 10000, dataType: "json", success: function (result) { var quotes = result.d; var jEl = $("#stockTemplate").tmpl(quotes); $("#quoteDisplay").empty().append(jEl); }, error: function (xhr, status) { alert(status + "\r\n" + xhr.responseText); } }); }; </script> In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that returns the actual array of quotes. The template is applied with: var jEl = $("#stockTemplate").tmpl(quotes); which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array the template is automatically applied to each each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the tempalte that is currently executing. This makes it possible to keep context within the context of the template itself and also to pass context from a parent template to a child template which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls. 
In short there are options The above shows off some of the basics, but there’s much for functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery templates will become a native component in jQuery Core 1.5, so it’s definitely worthwhile checking out the engine today and get familiar with this interface. As much as I’m stoked about templating becoming part of the jQuery core because it’s such an integral part of many applications, there are also a couple shortcomings in the current incarnation: Lack of Error Handling Currently if you embed an expression that is invalid it’s simply not rendered. There’s no error rendered into the template nor do the various  template functions throw errors which leaves finding of bugs as a runtime exercise. I would like some mechanism – optional if possible – to be able to get error info of what is failing in a template when it’s rendered. No String Output Templates are always rendered into a jQuery matched set and there’s no way that I can see to directly render to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output. Limited JavaScript Access Unlike John Resig’s original MicroTemplating Engine which was entirely based on JavaScript code generation these templates are limited to a few structured commands that can ‘execute’. There’s no code execution inside of script code which means you’re limited to calling expressions available in global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic. Error handling has been discussed quite a bit and it’s likely there will be some solution to that particualar issue by the time jQuery templates ship. The others are relatively minor issues but something to think about anyway. jQuery Data Link http://api.jquery.com/category/plugins/data-link/ jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object’s properties. The typical scenario is linking a textbox to a property of an object and have the object updated when the text in the textbox is changed and have the textbox change when the value in the object or the entire object changes. The plug-in also supports converter functions that can be applied to provide the conversion logic from string to some other value typically necessary for mapping things like textbox string input to say a number property and potentially applying additional formatting and calculations. In theory this sounds great, however in reality this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data: person = { firstName: "rick", lastName: "strahl"}; $(document).ready( function() { // provide for two-way linking of inputs $("form").link(person); // bind to non-input elements explicitly $("#objFirst").link(person, { firstName: { name: "objFirst", convertBack: function (value, source, target) { $(target).text(value); } } }); $("#objLast").link(person, { lastName: { name: "objLast", convertBack: function (value, source, target) { $(target).text(value); } } }); }); This code hooks up two-way linking between a couple of textboxes on the page and the person object. 
The first line in the .ready() handler provides mapping of object to form field with the same field names as properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements which is effectively a one-way bind. You specify the object and a then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the input elements bound. Unfortunately the syntax to do this is not very natural as you have to rely on the jQuery data object. To update an object’s properties and get change notification looks like this: function updateFirstName() { $(person).data("firstName", person.firstName + " (code updated)"); } This works fine in causing any linked fields to be updated. In the bindings above both the firstName input field and objFirst DOM element gets updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure you’re binding through multiple layers of abstraction now but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn’t help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can’t be assigned from the mappings you’re going to end up duplicating work loading the data using some other mechanism. There’s no easy way to re-bind data with a different object altogether since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that’s fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two way binding can be very useful but it has to be easy and most importantly natural. If it’s more work to hook up a binding than writing a couple of lines to do binding/unbinding this sort of thing helps very little in most scenarios. In talking to some of the developers the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point. 
jQuery Globalization http://github.com/jquery/jquery-global Globalization in JavaScript applications often gets short shrift and part of the reason for this is that natively in JavaScript there’s little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we’re fairly spoiled by the richness of APIs provided in the framework and when dealing with client development one really notices the lack of these features. While you may not necessarily need to localize your application the globalization plug-in also helps with some basic tasks for non-localized applications: Dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript as there are no formatters whatsoever except the .toString() method which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following for example to format numbers and dates: var date = new Date(); var output = $.format(date, "MMM. dd, yy") + "\r\n" + $.format(date, "d") + "\r\n" + // 10/25/2010 $.format(1222.32213, "N2") + "\r\n" + $.format(1222.33, "c") + "\r\n"; alert(output); This becomes even more useful if you combine it with templates which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded you can create template expressions that use the $.format function. Here’s the template I used earlier for the stock quote again with a couple of formats applied: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div> <div>${$.format(LastPrice,"N2")}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div> <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div> </div> </script> There are also parsing methods that can parse dates and numbers from strings into numbers easily: alert($.parseDate("25.10.2010")); alert($.parseInt("12.222")); // de-DE uses . for thousands separators As you can see culture specific options are taken into account when parsing. The globalization plugin provides rich support for a variety of locales: Get a list of all available cultures Query cultures for culture items (like currency symbol, separators etc.) Localized string names for all calendar related items (days of week, months) Generated off of .NET’s supported locales In short you get much of the same functionality that you already might be using in .NET on the server side. The plugin includes a huge number of locales and an Globalization.all.min.js file that contains the text defaults for each of these locales as well as small locale specific script files that define each of the locale specific settings. It’s highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper. 
Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting which is a vital feature of many applications. Changes for Microsoft It’s good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery process of plug-ins and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  ASP.NET  

    Read the article

  • ASP.NET MVC 2 Released

    - by ScottGu
    I’m happy to announce that the final release of ASP.NET MVC 2 is now available for VS 2008/Visual Web Developer 2008 Express with ASP.NET 3.5.  You can download and install it from the following locations: Download ASP.NET MVC 2 using the Microsoft Web Platform Installer Download ASP.NET MVC 2 from the Download Center The final release of VS 2010 and Visual Web Developer 2010 will have ASP.NET MVC 2 built-in – so you won’t need an additional install in order to use ASP.NET MVC 2 with them.  ASP.NET MVC 2 We shipped ASP.NET MVC 1 a little less than a year ago.  Since then, almost 1 million developers have downloaded and used the final release, and its popularity has steadily grown month over month. ASP.NET MVC 2 is the next significant update of ASP.NET MVC. It is a compatible update to ASP.NET MVC 1 – so all the knowledge, skills, code, and extensions you already have with ASP.NET MVC continue to work and apply going forward. Like the first release, we are also shipping the source code for ASP.NET MVC 2 under an OSI-compliant open-source license. ASP.NET MVC 2 can be installed side-by-side with ASP.NET MVC 1 (meaning you can have some apps built with V1 and others built with V2 on the same machine).  We have instructions on how to update your existing ASP.NET MVC 1 apps to use ASP.NET MVC 2 using VS 2008 here.  Note that VS 2010 has an automated upgrade wizard that can automatically migrate your existing ASP.NET MVC 1 applications to ASP.NET MVC 2 for you. ASP.NET MVC 2 Features ASP.NET MVC 2 adds a bunch of new capabilities and features.  I’ve started a blog series about some of the new features, and will be covering them in more depth in the weeks ahead.  Some of the new features and capabilities include: New Strongly Typed HTML Helpers Enhanced Model Validation support across both server and client Auto-Scaffold UI Helpers with Template Customization Support for splitting up large applications into “Areas” Asynchronous Controllers support that enables long running tasks in parallel Support for rendering sub-sections of a page/site using Html.RenderAction Lots of new helper functions, utilities, and API enhancements Improved Visual Studio tooling support You can learn more about these features in the “What’s New in ASP.NET MVC 2” document on the www.asp.net/mvc web-site.  We are going to be posting a lot of new tutorials and videos shortly on www.asp.net/mvc that cover all the features in ASP.NET MVC 2 release.  We will also post an updated end-to-end tutorial built entirely with ASP.NET MVC 2 (much like the NerdDinner tutorial that I wrote that covers ASP.NET MVC 1).  Summary The ASP.NET MVC team delivered regular V2 preview releases over the last year to get feedback on the feature set.  I’d like to say a big thank you to everyone who tried out the previews and sent us suggestions/feedback/bug reports.  We hope you like the final release! Scott
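To give a feel for two of these items (strongly typed HTML helpers and model validation that spans server and client), here is a small sketch. The DinnerViewModel type and its fields are invented for illustration; the helpers and DataAnnotations attributes are the ones that ship with the release:

using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

// View model validated on the server and, with client validation enabled, in the browser too
public class DinnerViewModel
{
    [Required(ErrorMessage = "Title is required")]
    [StringLength(50)]
    public string Title { get; set; }

    [Range(1, 20, ErrorMessage = "Seats must be between 1 and 20")]
    public int Seats { get; set; }
}

public class DinnersController : Controller
{
    [HttpPost]
    public ActionResult Create(DinnerViewModel model)
    {
        if (!ModelState.IsValid)
            return View(model);   // redisplay the form with validation messages
        // ...persist the dinner here...
        return RedirectToAction("Index");
    }
}

// In the strongly typed view (Inherits="System.Web.Mvc.ViewPage<DinnerViewModel>"):
//   <% Html.EnableClientValidation(); %>
//   <%= Html.TextBoxFor(m => m.Title) %>
//   <%= Html.ValidationMessageFor(m => m.Title) %>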

    Read the article

  • An Open Letter from Lyle Ekdahl, Group Vice President and General Manager, Oracle's JD Edwards

    - by Brian Dayton
    From Lyle Ekdahl, Group Vice President and General Manager, Oracle's JD Edwards As you may have heard, we recently announced some changes to the way Oracle will offer licensing of technology products with JD Edwards EnterpriseOne. Specifically, we have withdrawn from new sales the product known as JD Edwards EnterpriseOne Technology Foundation ("Blue Stack"). Our motivation for this change is simply to streamline licensing for our customers. Going forward, customers will license Oracle products from Oracle and IBM products from IBM. Customers who are currently licensed for Technology Foundation will continue to receive support--unchanged--through September 30, 2016. This announcement affects how customers license these IBM products; it does not affect Oracle's certification roadmap for IBM products with JD Edwards EnterpriseOne. Customers who are currently running their JD Edwards EnterpriseOne infrastructure using IBM platform components can continue to do so regardless of whether they license these components via Technology Foundation or directly from IBM. New customers choosing to run JD Edwards EnterpriseOne on IBM technology should license JD Edwards EnterpriseOne Core Tools from Oracle while licensing Infrastructure and any licenses of IBM products from IBM. For more information about this announcement, customers should refer to My Oracle Support article 1232453.1 Questions included in the "Frequently Asked Questions" document on My Oracle Support: Is Oracle dropping support for IBM DB2 and IBM WebSphere with JD Edwards EnterpriseOne? No. This announcement affects how customers license these IBM products; it does not affect Oracle's certification roadmap for these products. The JD Edwards EnterpriseOne matrix of supported databases, web servers, and portals remains unchanged, including planned support for IBM DB2, IBM WebSphere Application Server, and IBM WebSphere Portal. Customers who are currently running their JD Edwards EnterpriseOne infrastructure using IBM platform components can continue to do so regardless of whether they license these components via Technology Foundation or directly from IBM. As always, the timing and versions of such third-party certifications remain at Oracle's discretion. Does this announcement mean that Oracle is withdrawing support for JD Edwards EnterpriseOne on the IBM i platform? Absolutely not. JD Edwards EnterpriseOne support on the IBM i platform remains unchanged. This announcement simply states that customers will acquire Oracle products from Oracle and IBM products from IBM. In fact, as evidenced by the recent "IBM i Solution Edition for JD Edwards" offering, IBM and the JD Edwards product teams continue to innovate and offer attractive, cost-competitive solutions to the ERP marketplace. For more information about this offering see: http://www-03.ibm.com/systems/i/advantages/oracle/. I hope this clarifies any concerns. Let me know if you have any additional questions or concerns. -Lyle

    Read the article

  • Slides of my HOL on MySQL Cluster

    - by user13819847
Hi! Thanks to everyone who attended my hands-on lab on MySQL Cluster at MySQL Connect last Saturday. The following are the links for the slides, the HOL instructions, and the code examples. I'll try to summarize my HOL below. The aim of the HOL was to help attendees familiarize themselves with MySQL Cluster, in particular by learning: the basics of MySQL Cluster architecture; the basics of MySQL Cluster configuration and administration; how to start a new Cluster for evaluation purposes; and how to connect to it. We started by introducing MySQL Cluster. MySQL Cluster is a proven technology that today is successfully servicing the most performance-intensive workloads. MySQL Cluster is deployed across telecom networks and is powering mission-critical web applications. Without trading off the use of commodity hardware, transactional consistency, or the use of complex queries, MySQL Cluster provides: Web Scalability (web-scale performance on both reads and writes), Carrier Grade Availability (99.999%), and Developer Agility (freedom to use SQL or NoSQL access methods). MySQL Cluster implements an auto-sharding, multi-master, shared-nothing architecture, where independent nodes can scale horizontally on commodity hardware with no shared disks, no shared memory, and no single point of failure. In the architecture of MySQL Cluster there are three types of nodes: management nodes, responsible for reading the configuration files, maintaining logs, and providing an interface to the administration of the entire cluster; data nodes, where data and indexes are stored; and API nodes, which provide the external connectivity (e.g. the NDB engine of the MySQL Server, APIs, Connectors). MySQL Cluster is recommended in situations where: it is crucial to reduce service downtime, because downtime has a heavy impact on the business; sharding the database to scale write performance would otherwise heavily impact application development (in MySQL Cluster the sharding is automatic and transparent to the application); there are real-time needs; there are unpredictable scalability demands; or it is important to have data-access flexibility (SQL & NoSQL). MySQL Cluster is available in two editions: the Community Edition (open source, freely downloadable from mysql.com) and the Carrier Grade Edition (commercial edition, which can be downloaded from eDelivery for evaluation purposes). MySQL Cluster Carrier Grade Edition adds, on top of the Community Edition, commercial extensions (MySQL Cluster Manager, MySQL Enterprise Monitor, MySQL Cluster Installer) and Oracle's Premium Support Services (largest team of MySQL experts backed by MySQL developers, forward-compatible hot fixes, multi-language support, and more). We concluded by talking about the MySQL Cluster vision: MySQL Cluster is the default database for anyone deploying rapidly evolving, realtime transactional services at web-scale, where downtime is simply not an option. From a practical point of view the HOL's steps were: MySQL Cluster installation; start-up and monitoring of the MySQL Cluster processes; client connection to the Management Server and to an SQL Node; and connection using the NoSQL NDB API and Connector/J. In the hope that this blog post can help you get started with MySQL Cluster, I take the opportunity to thank you for the questions you asked both during the HOL and at the MySQL Cluster booth. The slides are also on SlideShare: Santo Leto - MySQL Connect 2012 - Getting Started with Mysql Cluster. Happy Clustering!
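To make the "start a new Cluster for evaluation purposes" step concrete, a minimal single-host setup looks roughly like the sketch below. The paths, memory sizes, and two-data-node layout are illustrative assumptions; the HOL instructions linked above have the exact values used in the lab:

# all nodes on one machine, for evaluation only
mkdir -p /home/user/mycluster/mgmd /home/user/mycluster/ndbd1 /home/user/mycluster/ndbd2

cat > /home/user/mycluster/config.ini <<'EOF'
[ndbd default]
# two copies of every table fragment, 256M of in-memory storage per data node
NoOfReplicas=2
DataMemory=256M

# management node
[ndb_mgmd]
HostName=localhost
DataDir=/home/user/mycluster/mgmd

# first data node
[ndbd]
HostName=localhost
DataDir=/home/user/mycluster/ndbd1

# second data node
[ndbd]
HostName=localhost
DataDir=/home/user/mycluster/ndbd2

# SQL (API) node
[mysqld]
HostName=localhost
EOF

# start the management node first, then the data nodes, then the SQL node
ndb_mgmd -f /home/user/mycluster/config.ini --initial --configdir=/home/user/mycluster/mgmd
ndbd -c localhost:1186
ndbd -c localhost:1186
mysqld_safe --ndbcluster --ndb-connectstring=localhost:1186 &

# verify that all nodes are connected
ndb_mgm -e "show"

# tables must use the NDB engine to be stored in the cluster
mysql -u root -e "CREATE TABLE test.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;"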

    Read the article

  • Need help fixing a strange path error in bash

    - by Evan
    UPDATE Ok, I found some errors in the path which I think I fixed, but now it's not running in any case - which for some reason I think is a step forward. Thanks for suggesting the following steps, here is their output: user@computer:~$ echo $PATH /usr/share/fsl/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/usr/local/matlab/bin:/usr/local/VoxBo/bin:/usr/local/itt/idl64/bin:/usr/local/afni/bin/:/usr/local/mricron:/usr/lib/voxbo/bin:/home/user/folder:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11/:/usr/games/:/usr/local/matlab/bin:/usr/local/VoxBo/bin/:/usr/local/itt/idl64/bin:/usr/local/afni/bin/:/usr/local/mricron/ user@computer:~$ typeset -p PATH declare -x PATH="/usr/share/fsl/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/usr/local/matlab/bin:/usr/local/VoxBo/bin:/usr/local/itt/idl64/bin:/usr/local/afni/bin/:/usr/local/mricron:/usr/lib/voxbo/bin:/home/user/folder:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11/:/usr/games/:/usr/local/matlab/bin:/usr/local/VoxBo/bin/:/usr/local/itt/idl64/bin:/usr/local/afni/bin/:/usr/local/mricron/" user@computer:~$ type app1 app1 is /home/user/folder/app1 user@computer:~$ type app2 app2 is /home/user/folder/app2 user@computer:~$ app1 bash: /home/user/folder/app1: No such file or directory user@computer:~$ app2 bash: /home/user/folder/app2: No such file or directory user@computer:~$ /home/user/folder/app1 bash: /home/user/folder/app1: No such file or directory user@computer:~$ /home/user/folder/app2 bash: /home/user/folder/app2: No such file or directory user@computer:~$ cd /home/user/folder user@computer:~/folder$ app1 bash: /home/user/folder/app1: No such file or directory user@computer:~/folder$ ./app1 bash: ./app1: No such file or directory user@computer:~/folder$ ./app2 bash: ./app2: No such file or directory user@computer:~/folder$ ls -l total 29384 -rwxr-xr-x 1 user user 14949776 2011-02-03 11:09 app1 -rwxr-xr-x 1 user user 15137300 2011-02-03 11:10 app2 user@computer:~/folder$ Thanks for everyone's input! ORIGINAL QUESTION I have two executable files I downloaded and am trying to add to the path. They are located in /home/user/folder and the specific files are /home/user/folder/app1 /home/user/folder/app2 Both app1 and app2 have the executable flag set to all (user, group, other). I can execute the files if I am in /home/user/folder and I execute these commands ./app1 ./app2 However I can't run them from elsewhere. I added this line to my .profile PATH="$PATH:/home/user/folder" and then sourced the path with . /home/user/.profile and I can see app1 and app2 when I use command completion (pressing tab). However here is what happens when I try to run app1 or app2 with the following commands (the following only shows 'app1' but the same is true of 'app2') user@comp:~$ app1 -bash: app1: command not found user@comp:~$ /home/user/folder/app1 -bash: app1: command not found user@comp:~/folder$ ./app1 (program runs) I'm stumped :), I must have missed something simple. Thanks for your help!!
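One common cause of "No such file or directory" on a file that clearly exists and is executable is a missing ELF interpreter: typically a 32-bit binary on a 64-bit install that lacks the 32-bit loader. A few diagnostic commands along these lines would narrow it down (the package names are only a guess for that era of Ubuntu):

cd /home/user/folder

# Is the binary 32-bit or 64-bit, and is it dynamically linked?
file app1

# Which dynamic loader (interpreter) does it expect?
readelf -l app1 | grep interpreter    # e.g. /lib/ld-linux.so.2

# Is that loader actually present on this system?
ls -l /lib/ld-linux.so.2

# If it is missing on a 64-bit Ubuntu of this vintage, the 32-bit
# compatibility libraries usually provide it:
sudo apt-get install ia32-libs        # on newer releases: sudo apt-get install libc6:i386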

    Read the article

  • Open Data, Government and Transparency

    - by Tori Wieldt
    A new track at TDC (The Developer's Conference in Sao Paulo, Brazil) is titled Open Data. It deals with open data, government and transparency. Saturday will be a "transparency hacker day" where developers are invited to create applications using open data from the Brazilian government.  Alexandre Gomes, co-lead of the track, says "I want to inspire developers to become "Civic hackers:" developers who create apps to make society better." It is a chance for developers to do well and do good. There are many opportunities for developers, including monitoring government expenditures and getting citizens involved via social networks. The open data movement is growing worldwide. One initiative, the Open Government Partnership, is working to make government data easier to find and access. Making this data easily available means that with the right applications, it will be easier for people to make decisions and suggestions about government policies based on detailed information. Last April, the Open Government Partnership held its annual meeting in Brasilia, the capitol of Brazil. It was a great success showcasing the innovative work being done in open data by governments, civil societies and individuals around the world. For example, Bulgaria now publishes daily data on budget spending for all public institutions. Alexandre Gomes Explains Open Data At TDC, the Open Data track will include a presentation of examples of successful open data projects, an introduction to the semantic web, how to handle big data sets, techniques of data visualization, and how to design APIs.The other track lead is Christian Moryah Miranda, a systems analyst for the Brazilian Government's Ministry of Planning. "The Brazilian government wholeheartedly supports this effort. In order to make our data available to the public, it forces us to be more consistent with our data across ministries, and that's a good step forward for us," he said. He explained the government knows they cannot achieve everything they would like without help from the public. "It is not the government versus the people, rather citizens are partners with the government, and together we can achieve great things!" Miranda exclaimed. Saturday at TDC will be a "transparency hacker day" where developers will be invited to create applications using open data from the Brazilian government. Attendees are invited to pitch their ideas, work in small groups, and present their project at the end of the conference. "For example," Gomes said, "the Brazilian government just released the salaries of all government employees and I can't wait to see what developers can do with that." Resources Open Government Partnership  U.S. Government Open Data ProjectBrazilian Government Open Data ProjectU.K. Government Open Data Project 2012 International Open Government Data Conference 

    Read the article

  • SQL SERVER – 4 Tips for ETL Software IDE Developers

    - by pinaldave
In a previous blog, I introduced the notion of Semantic Types. To an end-user, a seamlessly integrated semantic typing engine significantly increases the ease of use of an ETL IDE (integrated development environment, or developer studio). This led me to think about other ease-of-use issues I have encountered while building ETL applications. When I get stumped while programming, I find myself asking variations on these questions: “How do I…?” “Now what?” “Why isn’t this working?” “Why do I have to redo the work I just did?” It seems to me that a good ETL IDE will anticipate these questions and seek to answer them before they are even asked. So here are my tips to help software vendors build developer IDEs that actually make development easier. How do I…? While developing an ETL application, have you ever asked yourself: “How do I set up the connection to my SQL Server database?”, “How do I import my table definitions from Access?”, etc.? An easy answer might be “read the manual”, but sometimes product manuals are not robust or easily accessible. So, integrating robust how-to instructions directly into your ETL studio would help users get the information they need at the time they need it. Now what? IDEs in general know where you last clicked or performed an action using an input device such as a keyboard, so they should be able to reasonably predict the design context you are in and suggest the next steps accordingly. Context-sensitive suggestions based on the state of the user’s work will help users move forward in ETL application development. Why isn’t this working? Or why do I have to wait until I compile to be told about a critical design issue? If an ETL IDE is smart enough to signal to users what in their design structures is left to be completed or has been completed incorrectly, then the developer can spend much less time in the design → compile → error-correct loop. Just-in-time validation helps users detect and correct programming errors earlier in the ETL development life cycle. Why do I have to redo the work I just did? In ETL development, schemas, transformation rules, connectivity objects, etc., can be reused in various situations. Using mouse-clicks to build and manage libraries of reusable design objects implies that the application development effort should decrease over time as the library acquires more objects. I met a great company at SQL Pass that is trying to address many of these usability issues. Check them out at www.expressor-software.com. What other ease-of-use suggestions do you have for ETL software vendors? Please post your valuable comments. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL

    Read the article

  • SIM to OIM Migration: A How-to Guide to Avoid Costly Mistakes (SDG Corporation)

    - by Darin Pendergraft
    In the fall of 2012, Oracle launched a major upgrade to its IDM portfolio: the 11gR2 release.  11gR2 had four major focus areas: More simplified and customizable user experience Support for cloud, mobile, and social applications Extreme scalability Clear upgrade path For SUN migration customers, it is critical to develop and execute a clearly defined plan prior to beginning this process.  The plan should include initiation and discovery, assessment and analysis, future state architecture, review and collaboration, and gap analysis.  To help better understand your upgrade choices, SDG, an Oracle partner has developed a series of three whitepapers focused on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration. In the second of this series on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration, Santosh Kumar Singh from SDG  discusses the proper steps that should be taken during the planning-to-post implementation phases to ensure a smooth transition from SIM to OIM. Read the whitepaper for Part 2: Download Part 2 from SDGC.com In the last of this series of white papers, Santosh will talk about Identity and Access Management best practices and how these need to be considered when going through with an OIM migration. If you have not taken the opportunity, please read the first in this series which discusses the Migration Approach, Methodology, and Tools for you to consider when planning a migration from SIM to OIM. Read the white paper for part 1: Download Part 1 from SDGC.com About the Author: Santosh Kumar Singh Identity and Access Management (IAM) Practice Leader Santosh, in his capacity as SDG Identity and Access Management (IAM) Practice Leader, has direct senior management responsibility for the firm's strategy, planning, competency building, and engagement deliverance for this Practice. He brings over 12+ years of extensive IT, business, and project management and delivery experience, primarily within enterprise directory, single sign-on (SSO) application, and federated identity services, provisioning solutions, role and password management, and security audit and enterprise blueprint. Santosh possesses strong architecture and implementation expertise in all areas within these technologies and has repeatedly lead teams in successfully deploying complex technical solutions. About SDG: SDG Corporation empowers forward thinking companies to strategize their future, realize their vision, and minimize their IT risk. SDG distinguishes itself by offering flexible business models to fit their clients’ needs; faster time-to-market with its pre-built solutions and frameworks; a broad-based foundation of domain experts, and deep program management expertise. (www.sdgc.com)

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while using those features inside of .NET applications hasn't been as straight forward as could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom project like SharpMap or a hack using the Sql Server specific Geo types found in the Microsoft.SqlTypes assembly that ships with SQL server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5 the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single location and distance calculations which is probably the most common usage scenario. SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity framework, lets take a look at how SQL Server allows you to use spatial data to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:CREATE TABLE [dbo].[Geo]( [id] [int] IDENTITY(1,1) NOT NULL, [Location] [geography] NULL, [Long] [float] NOT NULL, [Lat] [float] NOT NULL ) Now using plain SQL you can insert data into the table using geography::STGeoFromText SQL CLR function:insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825) The STGeomFromText function accepts a string that points to a geometric item (a point here but can also be a line or path or polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier) which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326 as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format which looks something like this: Once the location data is in the database you can query the data and do simple distance computations very easily. For example to calculate the distance of each of the values in the database to another spatial point is very easy to calculate. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database. 
Using the Location field the SQL looks like this:-- create a source point DECLARE @s geography SET @s = geography:: STGeomFromText('POINT(-121.527200 45.712113)' , 4326); --- return the ids select ID, Location as Geo , Location .ToString() as Point , @s.STDistance( Location) as distance from Geo order by distance The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference for which you can use the geography::STDistance function. This query produces the following output: The STDistance function returns the straight line distance between the passed in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field but produces the same result as the previous query.--- using float fields select ID, geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326), geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326). ToString(), @s.STDistance( geography::STGeomFromText ('POINT(' + STR(long ,15, 7) + ' ' + Str(lat ,15, 7) + ')' , 4326)) as distance from geo order by distance Spatial Data in the Entity Framework Prior to Entity Framework 5.0 on .NET 4.5 consuming of the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5 however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions which in turn can be used to do common spatial queries like I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude. Creating a Model with Spatial Data Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography: public class GeoLocationContext : DbContext { public DbSet<GeoLocation> Locations { get; set; } } public class GeoLocation { public int Id { get; set; } public DbGeography Location { get; set; } public string Address { get; set; } } That's all there's to it. When you run this now against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier. Adding Spatial Data to the Database Next let's add some data to the table that includes some latitude and longitude data. 
An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:[TestMethod] public void AddLocationsToDataBase() { var context = new GeoLocationContext(); // remove all context.Locations.ToList().ForEach( loc => context.Locations.Remove(loc)); context.SaveChanges(); var location = new GeoLocation() { // Create a point using native DbGeography Factory method Location = DbGeography.PointFromText( string.Format("POINT({0} {1})", -121.527200,45.712113) ,4326), Address = "301 15th Street, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.714240, -121.517265), Address = "The Hatchery, Bingen" }; context.Locations.Add(location); location = new GeoLocation() { // Create a point using a helper function (lat/long) Location = CreatePoint(45.708457, -121.514432), Address = "Kaze Sushi, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.722780, -120.209227), Address = "Arlington, OR" }; context.Locations.Add(location); context.SaveChanges(); } As promised, a DbGeography object has to be created with one of the static factory methods provided on the type as the Location.Longitude and Location.Latitude properties are read only. Here I'm using PointFromText() which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of Points easier to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:public static DbGeography CreatePoint(double latitude, double longitude) { var text = string.Format(CultureInfo.InvariantCulture.NumberFormat, "POINT({0} {1})", longitude, latitude); // 4326 is most common coordinate system used by GPS/Maps return DbGeography.PointFromText(text, 4326); } Using the helper the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around because Latitude and Longitude is the common format I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is changed the data is written into the database using the SQL Geography type which looks the same as in the earlier SQL examples shown. Querying Once you have some location data in the database it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location and order it by distance. 
Using LINQ to Entities a query like this is easy to construct:[TestMethod] public void QueryLocationsTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 kilometers ordered by distance var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) < 5000) .OrderBy( loc=> loc.Location.Distance(sourcePoint) ) .Select( loc=> new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance); } } This example produces: 301 15th Street, Hood River (0 meters)The Hatchery, Bingen (809 meters)Kaze Sushi, Hood River (1,074 meters)   The first point in the database is the same as my source point I'm comparing against so the distance is 0. The other two are within the 5 mile radius, while the Arlington location which is 65 miles or so out is not returned. The result is ordered by distance from closest to furthest away. In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say when you have to points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this. Meters vs. Miles As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (can be found in GeoUtils.cs of the sample project):/// <summary> /// Convert meters to miles /// </summary> /// <param name="meters"></param> /// <returns></returns> public static double MetersToMiles(double? meters) { if (meters == null) return 0F; return meters.Value * 0.000621371192; } /// <summary> /// Convert miles to meters /// </summary> /// <param name="miles"></param> /// <returns></returns> public static double MilesToMeters(double? miles) { if (miles == null) return 0; return miles.Value * 1609.344; } Using these two helpers you can query on miles like this:[TestMethod] public void QueryLocationsMilesTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 miles ordered by distance var fiveMiles = GeoUtils.MilesToMeters(5); var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles) .OrderBy(loc => loc.Location.Distance(sourcePoint)) .Select(loc => new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n1} miles)", location.Address, GeoUtils.MetersToMiles(location.Distance)); } } which produces: 301 15th Street, Hood River (0.0 miles)The Hatchery, Bingen (0.5 miles)Kaze Sushi, Hood River (0.7 miles) Nice 'n simple. 
.NET 4.5 Only Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4 which ships in the same NuGet package or installer) and requires .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity which is a CLR assembly and is only updated by major versions of .NET. Why this decision was made to add these types to System.Data.Entity rather than to the frequently updated EntityFramework assembly that would have possibly made this work in .NET 4.0 is beyond me, especially given that there are no native .NET framework spatial types to begin with. I find it also odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live on those assemblies. They will also work for general purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services which in turn don't work with Entity framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as a I may be, for EF specific code the Entity framework specific types are easy to use and work well. Working with pre-.NET 4.5 Entity Framework and Spatial Data If you can't go to .NET 4.5 just yet you can also still use spatial features in Entity Framework, but it's a lot more work as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same TSQL syntax I showed earlier using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:[TestMethod] public void RawSqlEfAddTest() { string sqlFormat = @"insert into GeoLocations( Location, Address) values ( geography::STGeomFromText('POINT({0} {1})', 4326),@p0 )"; var sql = string.Format(sqlFormat,-121.527200, 45.712113); Console.WriteLine(sql); var context = new GeoLocationContext(); Assert.IsTrue(context.Database.ExecuteSqlCommand(sql,"301 N. 15th Street") > 0); } Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice but is required here. I was unable to use ExecuteSqlCommand() and its named parameter syntax as the longitude and latitude parameters are embedded into a string. Rest assured it's required as the following does not work:string sqlFormat = @"insert into GeoLocations( Location, Address) values ( geography::STGeomFromText('POINT(@p0 @p1)', 4326),@p2 )";context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street") Explicitly assigning the point value with string.format works however. There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get Latitude and Longitude) and you can return calculated values like distance. 
Here's an example of how to retrieve some geo data into a resultset using EF's SqlQuery method:[TestMethod] public void RawSqlEfQueryTest() { var sqlFormat = @" DECLARE @s geography SET @s = geography:: STGeomFromText('POINT({0} {1})' , 4326); SELECT Address, Location.ToString() as GeoString, @s.STDistance( Location) as Distance FROM GeoLocations ORDER BY Distance"; var sql = string.Format(sqlFormat, -121.527200, 45.712113); var context = new GeoLocationContext(); var locations = context.Database.SqlQuery<ResultData>(sql); Assert.IsTrue(locations.Count() > 0); foreach (var location in locations) { Console.WriteLine(location.Address + " " + location.GeoString + " " + location.Distance); } } public class ResultData { public string GeoString { get; set; } public double Distance { get; set; } public string Address { get; set; } } Hopefully you don't have to resort to this approach as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving data PKs only and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious, but it did work. Summary For the current project I'm working on we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app relies heavily on spatial queries and it was worth taking a chance with pre-release code to get this ease of integration as opposed to manually falling back to stored procedures or raw SQL string queries for spatial-specific queries. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-) Resources Download Sample Project © Rick Strahl, West Wind Technologies, 2005-2012 Posted in ADO.NET  Sql Server  .NET

    Read the article

  • Predicting Likelihood of Click with Multiple Presentations

    - by Michel Adar
    When using predictive models to predict the likelihood of an ad or a banner to be clicked on it is common to ignore the fact that the same content may have been presented in the past to the same visitor. While the error may be small if the visitors do not often see repeated content, it may be very significant for sites where visitors come repeatedly. This is a well recognized problem that usually gets handled with presentation thresholds – do not present the same content more than 6 times. Observations and measurements of visitor behavior provide evidence that something better is needed. Observations For a specific visitor, during a single session, for a banner in a not too prominent space, the second presentation of the same content is more likely to be clicked on than the first presentation. The difference can be 30% to 100% higher likelihood for the second presentation when compared to the first. That is, for example, if the first presentation has an average click rate of 1%, the second presentation may have an average CTR of between 1.3% and 2%. After the second presentation the CTR stays more or less the same for a few more presentations. The number of presentations in this plateau seems to vary by the location of the content in the page and by the visual attraction of the content. After these few presentations the CTR starts decaying with a curve that is very well approximated by an exponential decay. For example, the 13th presentation may have 90% the likelihood of the 12th, and the 14th has 90% the likelihood of the 13th. The decay constant seems also to depend on the visibility of the content. Modeling Options Now that we know the empirical data, we can propose modeling techniques that will correctly predict the likelihood of a click. Use presentation number as an input to the predictive model Probably the most straight forward approach is to add the presentation number as an input to the predictive model. While this is certainly a simple solution, it carries with it several problems, among them: If the model learns on each case, repeated non-clicks for the same content will reinforce the belief of the model on the non-clicker disproportionately. That is, the weight of a person that does not click for 200 presentations of an offer may be the same as 100 other people that on average click on the second presentation. The effect of the presentation number is not a customer characteristic or a piece of contextual data about the interaction with the customer, but it is contextual data about the content presented. Models tend to underestimate the effect of the presentation number. For these reasons it is not advisable to use this approach when the average number of presentations of the same content to the same person is above 3, or when there are cases of having the presentation number be very large, in the tens or hundreds. Use presentation number as a partitioning attribute to the predictive model In this approach we essentially build a separate predictive model for each presentation number. This approach overcomes all of the problems in the previous approach, nevertheless, it can be applied only when the volume of data is large enough to have these very specific sub-models converge.
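The curve described above (a jump on the second presentation, a short plateau, then exponential decay) is straightforward to express directly. The sketch below is purely illustrative; the uplift factor, plateau length, and decay constant are invented stand-ins for values that would be measured per placement:

# Illustrative model of click likelihood as a function of presentation number.
def presentation_ctr(base_ctr, n, uplift=1.5, plateau_end=6, decay=0.9):
    """Expected click-through rate for the n-th presentation (n >= 1)."""
    if n <= 1:
        return base_ctr                      # first presentation: the baseline rate
    if n <= plateau_end:
        return base_ctr * uplift             # 2nd presentation onward: elevated, roughly flat
    # past the plateau: each further presentation multiplies the rate by the decay constant
    return base_ctr * uplift * decay ** (n - plateau_end)

if __name__ == "__main__":
    for n in (1, 2, 6, 12, 13, 20):
        print(n, round(presentation_ctr(0.01, n), 5))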

    Read the article

< Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >